00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2276 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3539 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.033 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.034 The recommended git tool is: git 00:00:00.034 using credential 00000000-0000-0000-0000-000000000002 00:00:00.037 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.053 Fetching changes from the remote Git repository 00:00:00.055 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.080 Using shallow fetch with depth 1 00:00:00.080 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.080 > git --version # timeout=10 00:00:00.116 > git --version # 'git version 2.39.2' 00:00:00.116 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.178 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.178 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.363 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.375 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.388 Checking out Revision bb1b9bfed281c179b06b3c39bbc702302ccac514 (FETCH_HEAD) 00:00:02.388 > git config core.sparsecheckout # timeout=10 00:00:02.399 > git read-tree -mu HEAD # timeout=10 00:00:02.414 > git checkout -f bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=5 
00:00:02.430 Commit message: "scripts/kid: add issue 3551" 00:00:02.431 > git rev-list --no-walk bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=10 00:00:02.821 [Pipeline] Start of Pipeline 00:00:02.831 [Pipeline] library 00:00:02.832 Loading library shm_lib@master 00:00:02.832 Library shm_lib@master is cached. Copying from home. 00:00:02.848 [Pipeline] node 00:00:02.871 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.873 [Pipeline] { 00:00:02.881 [Pipeline] catchError 00:00:02.882 [Pipeline] { 00:00:02.892 [Pipeline] wrap 00:00:02.898 [Pipeline] { 00:00:02.905 [Pipeline] stage 00:00:02.907 [Pipeline] { (Prologue) 00:00:02.925 [Pipeline] echo 00:00:02.927 Node: VM-host-WFP7 00:00:02.933 [Pipeline] cleanWs 00:00:02.944 [WS-CLEANUP] Deleting project workspace... 00:00:02.944 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.952 [WS-CLEANUP] done 00:00:03.146 [Pipeline] setCustomBuildProperty 00:00:03.232 [Pipeline] httpRequest 00:00:03.839 [Pipeline] echo 00:00:03.840 Sorcerer 10.211.164.101 is alive 00:00:03.848 [Pipeline] retry 00:00:03.849 [Pipeline] { 00:00:03.861 [Pipeline] httpRequest 00:00:03.866 HttpMethod: GET 00:00:03.867 URL: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz 00:00:03.868 Sending request to url: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz 00:00:03.868 Response Code: HTTP/1.1 200 OK 00:00:03.869 Success: Status code 200 is in the accepted range: 200,404 00:00:03.869 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz 00:00:04.015 [Pipeline] } 00:00:04.033 [Pipeline] // retry 00:00:04.040 [Pipeline] sh 00:00:04.327 + tar --no-same-owner -xf jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz 00:00:04.342 [Pipeline] httpRequest 00:00:04.772 [Pipeline] echo 00:00:04.774 Sorcerer 10.211.164.101 is alive 00:00:04.784 [Pipeline] retry 00:00:04.786 [Pipeline] { 00:00:04.799 [Pipeline] 
httpRequest 00:00:04.804 HttpMethod: GET 00:00:04.805 URL: http://10.211.164.101/packages/spdk_3a02df0b15619023c86fd608f37d8adedabb7103.tar.gz 00:00:04.805 Sending request to url: http://10.211.164.101/packages/spdk_3a02df0b15619023c86fd608f37d8adedabb7103.tar.gz 00:00:04.806 Response Code: HTTP/1.1 200 OK 00:00:04.807 Success: Status code 200 is in the accepted range: 200,404 00:00:04.807 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_3a02df0b15619023c86fd608f37d8adedabb7103.tar.gz 00:00:23.729 [Pipeline] } 00:00:23.748 [Pipeline] // retry 00:00:23.756 [Pipeline] sh 00:00:24.043 + tar --no-same-owner -xf spdk_3a02df0b15619023c86fd608f37d8adedabb7103.tar.gz 00:00:26.606 [Pipeline] sh 00:00:26.892 + git -C spdk log --oneline -n5 00:00:26.892 3a02df0b1 event: add new 'mappings' parameter to static scheduler 00:00:26.892 118c273ab event: enable changing back to static scheduler 00:00:26.892 7e6d8079b lib/fuse_dispatcher: destruction sequence fixed 00:00:26.892 8dce86055 module/vfu_device/vfu_virtio_fs: EP destruction fixed 00:00:26.892 8af292d89 lib/vfu_tgt: spdk_vfu_endpoint_ops.destruct retries 00:00:26.914 [Pipeline] withCredentials 00:00:26.926 > git --version # timeout=10 00:00:26.939 > git --version # 'git version 2.39.2' 00:00:26.959 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:26.961 [Pipeline] { 00:00:26.970 [Pipeline] retry 00:00:26.972 [Pipeline] { 00:00:26.987 [Pipeline] sh 00:00:27.273 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:27.286 [Pipeline] } 00:00:27.303 [Pipeline] // retry 00:00:27.308 [Pipeline] } 00:00:27.325 [Pipeline] // withCredentials 00:00:27.334 [Pipeline] httpRequest 00:00:27.764 [Pipeline] echo 00:00:27.766 Sorcerer 10.211.164.101 is alive 00:00:27.775 [Pipeline] retry 00:00:27.778 [Pipeline] { 00:00:27.791 [Pipeline] httpRequest 00:00:27.796 HttpMethod: GET 00:00:27.797 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 
00:00:27.798 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:27.813 Response Code: HTTP/1.1 200 OK 00:00:27.814 Success: Status code 200 is in the accepted range: 200,404 00:00:27.815 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:09.468 [Pipeline] } 00:01:09.486 [Pipeline] // retry 00:01:09.494 [Pipeline] sh 00:01:09.779 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:11.174 [Pipeline] sh 00:01:11.460 + git -C dpdk log --oneline -n5 00:01:11.460 caf0f5d395 version: 22.11.4 00:01:11.460 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:11.460 dc9c799c7d vhost: fix missing spinlock unlock 00:01:11.460 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:11.460 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:11.476 [Pipeline] writeFile 00:01:11.490 [Pipeline] sh 00:01:11.775 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:11.787 [Pipeline] sh 00:01:12.071 + cat autorun-spdk.conf 00:01:12.071 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.071 SPDK_RUN_ASAN=1 00:01:12.071 SPDK_RUN_UBSAN=1 00:01:12.071 SPDK_TEST_RAID=1 00:01:12.071 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:12.071 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:12.071 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:12.079 RUN_NIGHTLY=1 00:01:12.081 [Pipeline] } 00:01:12.094 [Pipeline] // stage 00:01:12.110 [Pipeline] stage 00:01:12.113 [Pipeline] { (Run VM) 00:01:12.127 [Pipeline] sh 00:01:12.421 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:12.421 + echo 'Start stage prepare_nvme.sh' 00:01:12.421 Start stage prepare_nvme.sh 00:01:12.421 + [[ -n 5 ]] 00:01:12.421 + disk_prefix=ex5 00:01:12.421 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:12.421 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:12.421 + source 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:12.421 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.421 ++ SPDK_RUN_ASAN=1 00:01:12.421 ++ SPDK_RUN_UBSAN=1 00:01:12.421 ++ SPDK_TEST_RAID=1 00:01:12.421 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:12.421 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:12.421 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:12.421 ++ RUN_NIGHTLY=1 00:01:12.421 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:12.421 + nvme_files=() 00:01:12.421 + declare -A nvme_files 00:01:12.421 + backend_dir=/var/lib/libvirt/images/backends 00:01:12.421 + nvme_files['nvme.img']=5G 00:01:12.421 + nvme_files['nvme-cmb.img']=5G 00:01:12.421 + nvme_files['nvme-multi0.img']=4G 00:01:12.421 + nvme_files['nvme-multi1.img']=4G 00:01:12.421 + nvme_files['nvme-multi2.img']=4G 00:01:12.421 + nvme_files['nvme-openstack.img']=8G 00:01:12.421 + nvme_files['nvme-zns.img']=5G 00:01:12.421 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:12.421 + (( SPDK_TEST_FTL == 1 )) 00:01:12.421 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:12.421 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:12.421 + for nvme in "${!nvme_files[@]}" 00:01:12.421 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:12.421 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.421 + for nvme in "${!nvme_files[@]}" 00:01:12.421 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:12.421 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.421 + for nvme in "${!nvme_files[@]}" 00:01:12.422 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:12.422 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:12.422 + for nvme in "${!nvme_files[@]}" 00:01:12.422 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:12.422 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.422 + for nvme in "${!nvme_files[@]}" 00:01:12.422 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:12.422 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.422 + for nvme in "${!nvme_files[@]}" 00:01:12.422 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:12.422 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.422 + for nvme in "${!nvme_files[@]}" 00:01:12.422 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:12.681 
Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.681 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:12.681 + echo 'End stage prepare_nvme.sh' 00:01:12.681 End stage prepare_nvme.sh 00:01:12.693 [Pipeline] sh 00:01:12.976 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:12.977 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:01:12.977 00:01:12.977 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:12.977 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:12.977 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:12.977 HELP=0 00:01:12.977 DRY_RUN=0 00:01:12.977 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:12.977 NVME_DISKS_TYPE=nvme,nvme, 00:01:12.977 NVME_AUTO_CREATE=0 00:01:12.977 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:12.977 NVME_CMB=,, 00:01:12.977 NVME_PMR=,, 00:01:12.977 NVME_ZNS=,, 00:01:12.977 NVME_MS=,, 00:01:12.977 NVME_FDP=,, 00:01:12.977 SPDK_VAGRANT_DISTRO=fedora39 00:01:12.977 SPDK_VAGRANT_VMCPU=10 00:01:12.977 SPDK_VAGRANT_VMRAM=12288 00:01:12.977 SPDK_VAGRANT_PROVIDER=libvirt 00:01:12.977 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:12.977 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:12.977 SPDK_OPENSTACK_NETWORK=0 00:01:12.977 VAGRANT_PACKAGE_BOX=0 00:01:12.977 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:12.977 
FORCE_DISTRO=true 00:01:12.977 VAGRANT_BOX_VERSION= 00:01:12.977 EXTRA_VAGRANTFILES= 00:01:12.977 NIC_MODEL=virtio 00:01:12.977 00:01:12.977 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:12.977 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:14.886 Bringing machine 'default' up with 'libvirt' provider... 00:01:15.456 ==> default: Creating image (snapshot of base box volume). 00:01:15.456 ==> default: Creating domain with the following settings... 00:01:15.456 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728954147_abf09f2205952a3a9026 00:01:15.456 ==> default: -- Domain type: kvm 00:01:15.456 ==> default: -- Cpus: 10 00:01:15.456 ==> default: -- Feature: acpi 00:01:15.456 ==> default: -- Feature: apic 00:01:15.456 ==> default: -- Feature: pae 00:01:15.456 ==> default: -- Memory: 12288M 00:01:15.456 ==> default: -- Memory Backing: hugepages: 00:01:15.456 ==> default: -- Management MAC: 00:01:15.456 ==> default: -- Loader: 00:01:15.456 ==> default: -- Nvram: 00:01:15.456 ==> default: -- Base box: spdk/fedora39 00:01:15.456 ==> default: -- Storage pool: default 00:01:15.456 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728954147_abf09f2205952a3a9026.img (20G) 00:01:15.456 ==> default: -- Volume Cache: default 00:01:15.456 ==> default: -- Kernel: 00:01:15.456 ==> default: -- Initrd: 00:01:15.456 ==> default: -- Graphics Type: vnc 00:01:15.456 ==> default: -- Graphics Port: -1 00:01:15.456 ==> default: -- Graphics IP: 127.0.0.1 00:01:15.456 ==> default: -- Graphics Password: Not defined 00:01:15.456 ==> default: -- Video Type: cirrus 00:01:15.456 ==> default: -- Video VRAM: 9216 00:01:15.456 ==> default: -- Sound Type: 00:01:15.456 ==> default: -- Keymap: en-us 00:01:15.456 ==> default: -- TPM Path: 00:01:15.456 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:15.456 ==> default: -- Command line args: 00:01:15.456 
==> default: -> value=-device, 00:01:15.456 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:15.456 ==> default: -> value=-drive, 00:01:15.456 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:15.456 ==> default: -> value=-device, 00:01:15.456 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.456 ==> default: -> value=-device, 00:01:15.456 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:15.456 ==> default: -> value=-drive, 00:01:15.456 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:15.456 ==> default: -> value=-device, 00:01:15.456 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.456 ==> default: -> value=-drive, 00:01:15.456 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:15.456 ==> default: -> value=-device, 00:01:15.456 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.456 ==> default: -> value=-drive, 00:01:15.456 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:15.456 ==> default: -> value=-device, 00:01:15.456 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.456 ==> default: Creating shared folders metadata... 00:01:15.456 ==> default: Starting domain. 00:01:16.839 ==> default: Waiting for domain to get an IP address... 00:01:34.941 ==> default: Waiting for SSH to become available... 00:01:34.941 ==> default: Configuring and enabling network interfaces... 
00:01:40.219 default: SSH address: 192.168.121.247:22 00:01:40.219 default: SSH username: vagrant 00:01:40.219 default: SSH auth method: private key 00:01:42.173 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:50.301 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:55.595 ==> default: Mounting SSHFS shared folder... 00:01:58.161 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:58.161 ==> default: Checking Mount.. 00:01:59.546 ==> default: Folder Successfully Mounted! 00:01:59.546 ==> default: Running provisioner: file... 00:02:00.929 default: ~/.gitconfig => .gitconfig 00:02:01.189 00:02:01.189 SUCCESS! 00:02:01.189 00:02:01.189 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:01.189 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:01.189 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:01.189 00:02:01.199 [Pipeline] } 00:02:01.214 [Pipeline] // stage 00:02:01.225 [Pipeline] dir 00:02:01.226 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:01.228 [Pipeline] { 00:02:01.243 [Pipeline] catchError 00:02:01.245 [Pipeline] { 00:02:01.260 [Pipeline] sh 00:02:01.548 + vagrant ssh-config --host vagrant 00:02:01.548 + sed -ne /^Host/,$p 00:02:01.548 + tee ssh_conf 00:02:03.669 Host vagrant 00:02:03.669 HostName 192.168.121.247 00:02:03.669 User vagrant 00:02:03.669 Port 22 00:02:03.669 UserKnownHostsFile /dev/null 00:02:03.669 StrictHostKeyChecking no 00:02:03.669 PasswordAuthentication no 00:02:03.669 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:03.669 IdentitiesOnly yes 00:02:03.669 LogLevel FATAL 00:02:03.669 ForwardAgent yes 00:02:03.669 ForwardX11 yes 00:02:03.669 00:02:03.682 [Pipeline] withEnv 00:02:03.684 [Pipeline] { 00:02:03.696 [Pipeline] sh 00:02:03.979 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:03.979 source /etc/os-release 00:02:03.979 [[ -e /image.version ]] && img=$(< /image.version) 00:02:03.979 # Minimal, systemd-like check. 00:02:03.979 if [[ -e /.dockerenv ]]; then 00:02:03.979 # Clear garbage from the node's name: 00:02:03.979 # agt-er_autotest_547-896 -> autotest_547-896 00:02:03.979 # $HOSTNAME is the actual container id 00:02:03.980 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:03.980 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:03.980 # We can assume this is a mount from a host where container is running, 00:02:03.980 # so fetch its hostname to easily identify the target swarm worker. 
00:02:03.980 container="$(< /etc/hostname) ($agent)" 00:02:03.980 else 00:02:03.980 # Fallback 00:02:03.980 container=$agent 00:02:03.980 fi 00:02:03.980 fi 00:02:03.980 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:03.980 00:02:04.253 [Pipeline] } 00:02:04.268 [Pipeline] // withEnv 00:02:04.276 [Pipeline] setCustomBuildProperty 00:02:04.292 [Pipeline] stage 00:02:04.294 [Pipeline] { (Tests) 00:02:04.311 [Pipeline] sh 00:02:04.596 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:04.870 [Pipeline] sh 00:02:05.156 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:05.431 [Pipeline] timeout 00:02:05.432 Timeout set to expire in 1 hr 30 min 00:02:05.433 [Pipeline] { 00:02:05.447 [Pipeline] sh 00:02:05.732 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:06.325 HEAD is now at 3a02df0b1 event: add new 'mappings' parameter to static scheduler 00:02:06.342 [Pipeline] sh 00:02:06.624 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:06.899 [Pipeline] sh 00:02:07.185 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:07.466 [Pipeline] sh 00:02:07.753 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:08.014 ++ readlink -f spdk_repo 00:02:08.014 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:08.014 + [[ -n /home/vagrant/spdk_repo ]] 00:02:08.014 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:08.014 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:08.014 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:08.014 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:08.014 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:08.014 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:08.014 + cd /home/vagrant/spdk_repo 00:02:08.014 + source /etc/os-release 00:02:08.014 ++ NAME='Fedora Linux' 00:02:08.014 ++ VERSION='39 (Cloud Edition)' 00:02:08.014 ++ ID=fedora 00:02:08.015 ++ VERSION_ID=39 00:02:08.015 ++ VERSION_CODENAME= 00:02:08.015 ++ PLATFORM_ID=platform:f39 00:02:08.015 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:08.015 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:08.015 ++ LOGO=fedora-logo-icon 00:02:08.015 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:08.015 ++ HOME_URL=https://fedoraproject.org/ 00:02:08.015 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:08.015 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:08.015 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:08.015 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:08.015 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:08.015 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:08.015 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:08.015 ++ SUPPORT_END=2024-11-12 00:02:08.015 ++ VARIANT='Cloud Edition' 00:02:08.015 ++ VARIANT_ID=cloud 00:02:08.015 + uname -a 00:02:08.015 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:08.015 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:08.586 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:08.586 Hugepages 00:02:08.586 node hugesize free / total 00:02:08.586 node0 1048576kB 0 / 0 00:02:08.586 node0 2048kB 0 / 0 00:02:08.586 00:02:08.586 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:08.586 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:08.586 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:08.586 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:08.586 + rm -f /tmp/spdk-ld-path 00:02:08.586 + source autorun-spdk.conf 00:02:08.586 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.586 ++ SPDK_RUN_ASAN=1 00:02:08.586 ++ SPDK_RUN_UBSAN=1 00:02:08.586 ++ SPDK_TEST_RAID=1 00:02:08.586 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:08.586 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:08.586 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.586 ++ RUN_NIGHTLY=1 00:02:08.586 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:08.586 + [[ -n '' ]] 00:02:08.586 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:08.586 + for M in /var/spdk/build-*-manifest.txt 00:02:08.586 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:08.586 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.586 + for M in /var/spdk/build-*-manifest.txt 00:02:08.586 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:08.586 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.586 + for M in /var/spdk/build-*-manifest.txt 00:02:08.586 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:08.586 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.847 ++ uname 00:02:08.847 + [[ Linux == \L\i\n\u\x ]] 00:02:08.847 + sudo dmesg -T 00:02:08.847 + sudo dmesg --clear 00:02:08.847 + dmesg_pid=6164 00:02:08.847 + [[ Fedora Linux == FreeBSD ]] 00:02:08.847 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.847 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.847 + sudo dmesg -Tw 00:02:08.847 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:08.847 + [[ -x /usr/src/fio-static/fio ]] 00:02:08.847 + export FIO_BIN=/usr/src/fio-static/fio 00:02:08.847 + FIO_BIN=/usr/src/fio-static/fio 00:02:08.847 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:08.847 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:08.847 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:08.847 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.847 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.847 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:08.847 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.847 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.847 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:08.847 Test configuration: 00:02:08.847 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.847 SPDK_RUN_ASAN=1 00:02:08.847 SPDK_RUN_UBSAN=1 00:02:08.847 SPDK_TEST_RAID=1 00:02:08.847 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:08.847 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:08.847 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.847 RUN_NIGHTLY=1 01:03:21 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:08.847 01:03:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:08.847 01:03:21 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:08.847 01:03:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:08.847 01:03:21 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:08.847 01:03:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:08.847 01:03:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.847 01:03:21 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.847 01:03:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.847 01:03:21 -- paths/export.sh@5 -- $ export PATH 00:02:08.847 01:03:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.847 01:03:21 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:08.847 01:03:21 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:08.847 01:03:21 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728954201.XXXXXX 00:02:08.847 01:03:21 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728954201.MG7wd2 00:02:08.847 01:03:21 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:08.847 01:03:21 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:02:08.847 01:03:21 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:08.847 01:03:21 -- common/autobuild_common.sh@493 -- $ 
scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:08.847 01:03:21 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:08.847 01:03:21 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:08.847 01:03:21 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:08.847 01:03:21 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:08.847 01:03:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.107 01:03:21 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:09.107 01:03:21 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:09.107 01:03:21 -- pm/common@17 -- $ local monitor 00:02:09.107 01:03:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.107 01:03:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.107 01:03:21 -- pm/common@25 -- $ sleep 1 00:02:09.107 01:03:21 -- pm/common@21 -- $ date +%s 00:02:09.107 01:03:21 -- pm/common@21 -- $ date +%s 00:02:09.107 01:03:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728954201 00:02:09.107 01:03:21 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728954201 00:02:09.107 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728954201_collect-cpu-load.pm.log 00:02:09.107 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728954201_collect-vmstat.pm.log 00:02:10.049 01:03:22 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:10.049 01:03:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:10.049 01:03:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:10.049 01:03:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:10.049 01:03:22 -- spdk/autobuild.sh@16 -- $ date -u 00:02:10.049 Tue Oct 15 01:03:22 AM UTC 2024 00:02:10.049 01:03:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:10.049 v25.01-pre-64-g3a02df0b1 00:02:10.049 01:03:22 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:10.049 01:03:22 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:10.049 01:03:22 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:10.049 01:03:22 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:10.049 01:03:22 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.049 ************************************ 00:02:10.049 START TEST asan 00:02:10.049 ************************************ 00:02:10.049 using asan 00:02:10.049 01:03:22 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:10.049 00:02:10.049 real 0m0.001s 00:02:10.049 user 0m0.000s 00:02:10.049 sys 0m0.000s 00:02:10.049 01:03:22 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:10.049 01:03:22 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:10.049 ************************************ 00:02:10.049 END TEST asan 00:02:10.049 ************************************ 00:02:10.049 01:03:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:10.049 01:03:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:10.049 01:03:22 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:10.049 01:03:22 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:10.049 01:03:22 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.049 
************************************ 00:02:10.049 START TEST ubsan 00:02:10.049 ************************************ 00:02:10.049 using ubsan 00:02:10.049 01:03:22 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:10.049 00:02:10.049 real 0m0.000s 00:02:10.049 user 0m0.000s 00:02:10.049 sys 0m0.000s 00:02:10.049 01:03:22 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:10.049 01:03:22 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:10.049 ************************************ 00:02:10.049 END TEST ubsan 00:02:10.049 ************************************ 00:02:10.049 01:03:22 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:10.049 01:03:22 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:10.049 01:03:22 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:10.049 01:03:22 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:10.049 01:03:22 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:10.049 01:03:22 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.049 ************************************ 00:02:10.049 START TEST build_native_dpdk 00:02:10.049 ************************************ 00:02:10.049 01:03:22 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:10.049 01:03:22 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:10.049 01:03:22 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:10.049 01:03:22 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:10.049 01:03:22 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:10.049 01:03:22 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:10.049 01:03:22 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:10.049 01:03:22 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 
00:02:10.049 01:03:22 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:10.049 01:03:22 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:10.049 01:03:22 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:10.049 01:03:22 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:10.049 01:03:22 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:10.311 caf0f5d395 version: 22.11.4 00:02:10.311 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:10.311 dc9c799c7d vhost: fix missing spinlock unlock 00:02:10.311 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:10.311 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:10.311 01:03:22 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:10.311 patching file config/rte_config.h 00:02:10.311 Hunk #1 succeeded at 60 (offset 1 line). 
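The `cmp_versions` trace above splits each dotted version string on `.-:` into an array and compares the fields numerically, left to right. A minimal standalone sketch of that logic (the function name `ver_lt` is illustrative, not SPDK's actual helper; it assumes numeric fields without problematic leading zeros):

```shell
#!/usr/bin/env bash
# ver_lt A B: returns 0 (true) when version A < version B, 1 otherwise.
# Mirrors the field-by-field numeric comparison seen in the trace.
ver_lt() {
    local IFS=.-:              # same separators as the log's IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # pad missing fields with 0
        ((a < b)) && return 0
        ((a > b)) && return 1
    done
    return 1                   # equal versions are not less-than
}

ver_lt 22.11.4 21.11.0 || echo "22.11.4 is not < 21.11.0"   # matches the trace's return 1
ver_lt 22.11.4 24.07.0 && echo "22.11.4 is < 24.07.0"
```

Because the comparison is numeric per field (not lexicographic), `2.10` correctly sorts after `2.9`, which a plain string compare would get wrong.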
00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:10.311 patching file lib/pcapng/rte_pcapng.c 00:02:10.311 Hunk #1 succeeded at 110 (offset -18 lines). 
00:02:10.311 01:03:22 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:10.311 01:03:22 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:10.312 01:03:22 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:10.312 01:03:22 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:10.312 01:03:22 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:10.312 01:03:22 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:10.312 01:03:22 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:10.312 01:03:22 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:10.312 01:03:22 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:10.312 01:03:22 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:15.597 The Meson build system 00:02:15.597 Version: 1.5.0 00:02:15.597 
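The `printf %s,` call just before the `meson` invocation above is how the `DPDK_DRIVERS` array gets joined into the comma-separated `-Denable_drivers=` value (including the trailing comma visible in the configure line). A standalone sketch of that idiom, with the array contents copied from the log:

```shell
#!/usr/bin/env bash
# Join the enabled-driver list into meson's -Denable_drivers value.
# printf reuses the "%s," format once per array element, so the result
# is every entry followed by a comma -- trailing comma included.
DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
drivers=$(printf %s, "${DPDK_DRIVERS[@]}")
echo "$drivers"
# -> bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
```

Meson tolerates the trailing comma in list-valued options, so the log's configure line passes the joined string through unmodified.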
Source dir: /home/vagrant/spdk_repo/dpdk 00:02:15.597 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:15.597 Build type: native build 00:02:15.597 Program cat found: YES (/usr/bin/cat) 00:02:15.597 Project name: DPDK 00:02:15.597 Project version: 22.11.4 00:02:15.597 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:15.597 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:15.597 Host machine cpu family: x86_64 00:02:15.597 Host machine cpu: x86_64 00:02:15.597 Message: ## Building in Developer Mode ## 00:02:15.597 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:15.597 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:15.597 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:15.597 Program objdump found: YES (/usr/bin/objdump) 00:02:15.597 Program python3 found: YES (/usr/bin/python3) 00:02:15.597 Program cat found: YES (/usr/bin/cat) 00:02:15.597 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:15.597 Checking for size of "void *" : 8 00:02:15.597 Checking for size of "void *" : 8 (cached) 00:02:15.597 Library m found: YES 00:02:15.597 Library numa found: YES 00:02:15.598 Has header "numaif.h" : YES 00:02:15.598 Library fdt found: NO 00:02:15.598 Library execinfo found: NO 00:02:15.598 Has header "execinfo.h" : YES 00:02:15.598 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:15.598 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:15.598 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:15.598 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:15.598 Run-time dependency openssl found: YES 3.1.1 00:02:15.598 Run-time dependency libpcap found: YES 1.10.4 00:02:15.598 Has header "pcap.h" with dependency libpcap: YES 00:02:15.598 Compiler for C supports arguments -Wcast-qual: YES 00:02:15.598 Compiler for C supports arguments -Wdeprecated: YES 00:02:15.598 Compiler for C supports arguments -Wformat: YES 00:02:15.598 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:15.598 Compiler for C supports arguments -Wformat-security: NO 00:02:15.598 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:15.598 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:15.598 Compiler for C supports arguments -Wnested-externs: YES 00:02:15.598 Compiler for C supports arguments -Wold-style-definition: YES 00:02:15.598 Compiler for C supports arguments -Wpointer-arith: YES 00:02:15.598 Compiler for C supports arguments -Wsign-compare: YES 00:02:15.598 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:15.598 Compiler for C supports arguments -Wundef: YES 00:02:15.598 Compiler for C supports arguments -Wwrite-strings: YES 00:02:15.598 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:15.598 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:15.598 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:15.598 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:15.598 Compiler for C supports arguments -mavx512f: YES 00:02:15.598 Checking if "AVX512 checking" compiles: YES 00:02:15.598 Fetching value of define "__SSE4_2__" : 1 00:02:15.598 Fetching value of define "__AES__" : 1 00:02:15.598 Fetching value of define "__AVX__" : 1 00:02:15.598 Fetching value of define "__AVX2__" : 1 00:02:15.598 Fetching value of define "__AVX512BW__" : 1 00:02:15.598 Fetching value of define "__AVX512CD__" : 1 00:02:15.598 Fetching value of define "__AVX512DQ__" : 1 00:02:15.598 Fetching value of define "__AVX512F__" : 1 00:02:15.598 Fetching value of define "__AVX512VL__" : 1 00:02:15.598 Fetching value of define "__PCLMUL__" : 1 00:02:15.598 Fetching value of define "__RDRND__" : 1 00:02:15.598 Fetching value of define "__RDSEED__" : 1 00:02:15.598 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:15.598 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:15.598 Message: lib/kvargs: Defining dependency "kvargs" 00:02:15.598 Message: lib/telemetry: Defining dependency "telemetry" 00:02:15.598 Checking for function "getentropy" : YES 00:02:15.598 Message: lib/eal: Defining dependency "eal" 00:02:15.598 Message: lib/ring: Defining dependency "ring" 00:02:15.598 Message: lib/rcu: Defining dependency "rcu" 00:02:15.598 Message: lib/mempool: Defining dependency "mempool" 00:02:15.598 Message: lib/mbuf: Defining dependency "mbuf" 00:02:15.598 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:15.598 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:15.598 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:15.598 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:15.598 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:15.598 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:15.598 Compiler for C supports arguments -mpclmul: YES 00:02:15.598 Compiler for C supports arguments -maes: YES 
00:02:15.598 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:15.598 Compiler for C supports arguments -mavx512bw: YES 00:02:15.598 Compiler for C supports arguments -mavx512dq: YES 00:02:15.598 Compiler for C supports arguments -mavx512vl: YES 00:02:15.598 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:15.598 Compiler for C supports arguments -mavx2: YES 00:02:15.598 Compiler for C supports arguments -mavx: YES 00:02:15.598 Message: lib/net: Defining dependency "net" 00:02:15.598 Message: lib/meter: Defining dependency "meter" 00:02:15.598 Message: lib/ethdev: Defining dependency "ethdev" 00:02:15.598 Message: lib/pci: Defining dependency "pci" 00:02:15.598 Message: lib/cmdline: Defining dependency "cmdline" 00:02:15.598 Message: lib/metrics: Defining dependency "metrics" 00:02:15.598 Message: lib/hash: Defining dependency "hash" 00:02:15.598 Message: lib/timer: Defining dependency "timer" 00:02:15.598 Fetching value of define "__AVX2__" : 1 (cached) 00:02:15.598 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:15.598 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:15.598 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:15.598 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:15.598 Message: lib/acl: Defining dependency "acl" 00:02:15.598 Message: lib/bbdev: Defining dependency "bbdev" 00:02:15.598 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:15.598 Run-time dependency libelf found: YES 0.191 00:02:15.598 Message: lib/bpf: Defining dependency "bpf" 00:02:15.598 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:15.598 Message: lib/compressdev: Defining dependency "compressdev" 00:02:15.598 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:15.598 Message: lib/distributor: Defining dependency "distributor" 00:02:15.598 Message: lib/efd: Defining dependency "efd" 00:02:15.598 Message: lib/eventdev: Defining dependency "eventdev" 00:02:15.598 Message: lib/gpudev: 
Defining dependency "gpudev" 00:02:15.598 Message: lib/gro: Defining dependency "gro" 00:02:15.598 Message: lib/gso: Defining dependency "gso" 00:02:15.598 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:15.598 Message: lib/jobstats: Defining dependency "jobstats" 00:02:15.598 Message: lib/latencystats: Defining dependency "latencystats" 00:02:15.598 Message: lib/lpm: Defining dependency "lpm" 00:02:15.598 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:15.598 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:15.598 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:15.598 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:15.598 Message: lib/member: Defining dependency "member" 00:02:15.598 Message: lib/pcapng: Defining dependency "pcapng" 00:02:15.598 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:15.598 Message: lib/power: Defining dependency "power" 00:02:15.598 Message: lib/rawdev: Defining dependency "rawdev" 00:02:15.598 Message: lib/regexdev: Defining dependency "regexdev" 00:02:15.598 Message: lib/dmadev: Defining dependency "dmadev" 00:02:15.598 Message: lib/rib: Defining dependency "rib" 00:02:15.598 Message: lib/reorder: Defining dependency "reorder" 00:02:15.598 Message: lib/sched: Defining dependency "sched" 00:02:15.598 Message: lib/security: Defining dependency "security" 00:02:15.598 Message: lib/stack: Defining dependency "stack" 00:02:15.598 Has header "linux/userfaultfd.h" : YES 00:02:15.598 Message: lib/vhost: Defining dependency "vhost" 00:02:15.598 Message: lib/ipsec: Defining dependency "ipsec" 00:02:15.598 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:15.598 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:15.598 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:15.598 Message: lib/fib: Defining dependency "fib" 00:02:15.598 Message: lib/port: Defining dependency "port" 00:02:15.598 Message: lib/pdump: Defining dependency "pdump" 
00:02:15.598 Message: lib/table: Defining dependency "table" 00:02:15.598 Message: lib/pipeline: Defining dependency "pipeline" 00:02:15.598 Message: lib/graph: Defining dependency "graph" 00:02:15.598 Message: lib/node: Defining dependency "node" 00:02:15.598 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:15.598 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:15.598 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:15.598 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:15.598 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:15.598 Compiler for C supports arguments -Wno-unused-value: YES 00:02:15.598 Compiler for C supports arguments -Wno-format: YES 00:02:15.598 Compiler for C supports arguments -Wno-format-security: YES 00:02:15.598 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:15.598 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:16.982 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:16.982 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:16.982 Fetching value of define "__AVX2__" : 1 (cached) 00:02:16.982 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:16.982 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:16.982 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:16.982 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:16.982 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:16.982 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:16.982 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:16.982 Configuring doxy-api.conf using configuration 00:02:16.982 Program sphinx-build found: NO 00:02:16.982 Configuring rte_build_config.h using configuration 00:02:16.982 Message: 00:02:16.982 ================= 00:02:16.982 Applications Enabled 00:02:16.982 ================= 00:02:16.982 00:02:16.982 apps: 
00:02:16.982 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:16.982 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:16.982 test-security-perf, 00:02:16.982 00:02:16.982 Message: 00:02:16.982 ================= 00:02:16.982 Libraries Enabled 00:02:16.982 ================= 00:02:16.982 00:02:16.982 libs: 00:02:16.982 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:16.982 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:16.982 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:16.982 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:16.982 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:16.982 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:16.982 table, pipeline, graph, node, 00:02:16.982 00:02:16.982 Message: 00:02:16.982 =============== 00:02:16.982 Drivers Enabled 00:02:16.982 =============== 00:02:16.982 00:02:16.982 common: 00:02:16.982 00:02:16.982 bus: 00:02:16.982 pci, vdev, 00:02:16.982 mempool: 00:02:16.982 ring, 00:02:16.982 dma: 00:02:16.982 00:02:16.982 net: 00:02:16.982 i40e, 00:02:16.982 raw: 00:02:16.982 00:02:16.982 crypto: 00:02:16.982 00:02:16.982 compress: 00:02:16.982 00:02:16.982 regex: 00:02:16.982 00:02:16.982 vdpa: 00:02:16.982 00:02:16.982 event: 00:02:16.982 00:02:16.982 baseband: 00:02:16.982 00:02:16.982 gpu: 00:02:16.982 00:02:16.982 00:02:16.982 Message: 00:02:16.982 ================= 00:02:16.982 Content Skipped 00:02:16.982 ================= 00:02:16.982 00:02:16.982 apps: 00:02:16.982 00:02:16.982 libs: 00:02:16.982 kni: explicitly disabled via build config (deprecated lib) 00:02:16.982 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:16.982 00:02:16.982 drivers: 00:02:16.982 common/cpt: not in enabled drivers build config 00:02:16.982 common/dpaax: not in enabled drivers build 
config 00:02:16.982 common/iavf: not in enabled drivers build config 00:02:16.982 common/idpf: not in enabled drivers build config 00:02:16.982 common/mvep: not in enabled drivers build config 00:02:16.983 common/octeontx: not in enabled drivers build config 00:02:16.983 bus/auxiliary: not in enabled drivers build config 00:02:16.983 bus/dpaa: not in enabled drivers build config 00:02:16.983 bus/fslmc: not in enabled drivers build config 00:02:16.983 bus/ifpga: not in enabled drivers build config 00:02:16.983 bus/vmbus: not in enabled drivers build config 00:02:16.983 common/cnxk: not in enabled drivers build config 00:02:16.983 common/mlx5: not in enabled drivers build config 00:02:16.983 common/qat: not in enabled drivers build config 00:02:16.983 common/sfc_efx: not in enabled drivers build config 00:02:16.983 mempool/bucket: not in enabled drivers build config 00:02:16.983 mempool/cnxk: not in enabled drivers build config 00:02:16.983 mempool/dpaa: not in enabled drivers build config 00:02:16.983 mempool/dpaa2: not in enabled drivers build config 00:02:16.983 mempool/octeontx: not in enabled drivers build config 00:02:16.983 mempool/stack: not in enabled drivers build config 00:02:16.983 dma/cnxk: not in enabled drivers build config 00:02:16.983 dma/dpaa: not in enabled drivers build config 00:02:16.983 dma/dpaa2: not in enabled drivers build config 00:02:16.983 dma/hisilicon: not in enabled drivers build config 00:02:16.983 dma/idxd: not in enabled drivers build config 00:02:16.983 dma/ioat: not in enabled drivers build config 00:02:16.983 dma/skeleton: not in enabled drivers build config 00:02:16.983 net/af_packet: not in enabled drivers build config 00:02:16.983 net/af_xdp: not in enabled drivers build config 00:02:16.983 net/ark: not in enabled drivers build config 00:02:16.983 net/atlantic: not in enabled drivers build config 00:02:16.983 net/avp: not in enabled drivers build config 00:02:16.983 net/axgbe: not in enabled drivers build config 00:02:16.983 
net/bnx2x: not in enabled drivers build config 00:02:16.983 net/bnxt: not in enabled drivers build config 00:02:16.983 net/bonding: not in enabled drivers build config 00:02:16.983 net/cnxk: not in enabled drivers build config 00:02:16.983 net/cxgbe: not in enabled drivers build config 00:02:16.983 net/dpaa: not in enabled drivers build config 00:02:16.983 net/dpaa2: not in enabled drivers build config 00:02:16.983 net/e1000: not in enabled drivers build config 00:02:16.983 net/ena: not in enabled drivers build config 00:02:16.983 net/enetc: not in enabled drivers build config 00:02:16.983 net/enetfec: not in enabled drivers build config 00:02:16.983 net/enic: not in enabled drivers build config 00:02:16.983 net/failsafe: not in enabled drivers build config 00:02:16.983 net/fm10k: not in enabled drivers build config 00:02:16.983 net/gve: not in enabled drivers build config 00:02:16.983 net/hinic: not in enabled drivers build config 00:02:16.983 net/hns3: not in enabled drivers build config 00:02:16.983 net/iavf: not in enabled drivers build config 00:02:16.983 net/ice: not in enabled drivers build config 00:02:16.983 net/idpf: not in enabled drivers build config 00:02:16.983 net/igc: not in enabled drivers build config 00:02:16.983 net/ionic: not in enabled drivers build config 00:02:16.983 net/ipn3ke: not in enabled drivers build config 00:02:16.983 net/ixgbe: not in enabled drivers build config 00:02:16.983 net/kni: not in enabled drivers build config 00:02:16.983 net/liquidio: not in enabled drivers build config 00:02:16.983 net/mana: not in enabled drivers build config 00:02:16.983 net/memif: not in enabled drivers build config 00:02:16.983 net/mlx4: not in enabled drivers build config 00:02:16.983 net/mlx5: not in enabled drivers build config 00:02:16.983 net/mvneta: not in enabled drivers build config 00:02:16.983 net/mvpp2: not in enabled drivers build config 00:02:16.983 net/netvsc: not in enabled drivers build config 00:02:16.983 net/nfb: not in enabled 
drivers build config 00:02:16.983 net/nfp: not in enabled drivers build config 00:02:16.983 net/ngbe: not in enabled drivers build config 00:02:16.983 net/null: not in enabled drivers build config 00:02:16.983 net/octeontx: not in enabled drivers build config 00:02:16.983 net/octeon_ep: not in enabled drivers build config 00:02:16.983 net/pcap: not in enabled drivers build config 00:02:16.983 net/pfe: not in enabled drivers build config 00:02:16.983 net/qede: not in enabled drivers build config 00:02:16.983 net/ring: not in enabled drivers build config 00:02:16.983 net/sfc: not in enabled drivers build config 00:02:16.983 net/softnic: not in enabled drivers build config 00:02:16.983 net/tap: not in enabled drivers build config 00:02:16.983 net/thunderx: not in enabled drivers build config 00:02:16.983 net/txgbe: not in enabled drivers build config 00:02:16.983 net/vdev_netvsc: not in enabled drivers build config 00:02:16.983 net/vhost: not in enabled drivers build config 00:02:16.983 net/virtio: not in enabled drivers build config 00:02:16.983 net/vmxnet3: not in enabled drivers build config 00:02:16.983 raw/cnxk_bphy: not in enabled drivers build config 00:02:16.983 raw/cnxk_gpio: not in enabled drivers build config 00:02:16.983 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:16.983 raw/ifpga: not in enabled drivers build config 00:02:16.983 raw/ntb: not in enabled drivers build config 00:02:16.983 raw/skeleton: not in enabled drivers build config 00:02:16.983 crypto/armv8: not in enabled drivers build config 00:02:16.983 crypto/bcmfs: not in enabled drivers build config 00:02:16.983 crypto/caam_jr: not in enabled drivers build config 00:02:16.983 crypto/ccp: not in enabled drivers build config 00:02:16.983 crypto/cnxk: not in enabled drivers build config 00:02:16.983 crypto/dpaa_sec: not in enabled drivers build config 00:02:16.983 crypto/dpaa2_sec: not in enabled drivers build config 00:02:16.983 crypto/ipsec_mb: not in enabled drivers build config 
00:02:16.983 crypto/mlx5: not in enabled drivers build config 00:02:16.983 crypto/mvsam: not in enabled drivers build config 00:02:16.983 crypto/nitrox: not in enabled drivers build config 00:02:16.983 crypto/null: not in enabled drivers build config 00:02:16.983 crypto/octeontx: not in enabled drivers build config 00:02:16.983 crypto/openssl: not in enabled drivers build config 00:02:16.983 crypto/scheduler: not in enabled drivers build config 00:02:16.983 crypto/uadk: not in enabled drivers build config 00:02:16.983 crypto/virtio: not in enabled drivers build config 00:02:16.983 compress/isal: not in enabled drivers build config 00:02:16.983 compress/mlx5: not in enabled drivers build config 00:02:16.983 compress/octeontx: not in enabled drivers build config 00:02:16.983 compress/zlib: not in enabled drivers build config 00:02:16.983 regex/mlx5: not in enabled drivers build config 00:02:16.983 regex/cn9k: not in enabled drivers build config 00:02:16.983 vdpa/ifc: not in enabled drivers build config 00:02:16.983 vdpa/mlx5: not in enabled drivers build config 00:02:16.983 vdpa/sfc: not in enabled drivers build config 00:02:16.983 event/cnxk: not in enabled drivers build config 00:02:16.983 event/dlb2: not in enabled drivers build config 00:02:16.983 event/dpaa: not in enabled drivers build config 00:02:16.983 event/dpaa2: not in enabled drivers build config 00:02:16.983 event/dsw: not in enabled drivers build config 00:02:16.983 event/opdl: not in enabled drivers build config 00:02:16.983 event/skeleton: not in enabled drivers build config 00:02:16.983 event/sw: not in enabled drivers build config 00:02:16.983 event/octeontx: not in enabled drivers build config 00:02:16.983 baseband/acc: not in enabled drivers build config 00:02:16.983 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:16.983 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:16.983 baseband/la12xx: not in enabled drivers build config 00:02:16.983 baseband/null: not in 
enabled drivers build config 00:02:16.983 baseband/turbo_sw: not in enabled drivers build config 00:02:16.983 gpu/cuda: not in enabled drivers build config 00:02:16.983 00:02:16.983 00:02:16.983 Build targets in project: 311 00:02:16.983 00:02:16.983 DPDK 22.11.4 00:02:16.983 00:02:16.983 User defined options 00:02:16.983 libdir : lib 00:02:16.983 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:16.983 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:16.983 c_link_args : 00:02:16.983 enable_docs : false 00:02:16.983 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:16.983 enable_kmods : false 00:02:16.983 machine : native 00:02:16.983 tests : false 00:02:16.983 00:02:16.983 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:16.983 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:17.244 01:03:29 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:17.244 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:17.244 [1/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:17.244 [2/740] Generating lib/rte_kvargs_def with a custom command 00:02:17.244 [3/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:17.244 [4/740] Generating lib/rte_telemetry_def with a custom command 00:02:17.244 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:17.244 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:17.503 [7/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:17.503 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:17.503 [9/740] Linking static target lib/librte_kvargs.a 00:02:17.504 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:17.504 [11/740] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:17.504 [12/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:17.504 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:17.504 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:17.504 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:17.504 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:17.504 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:17.504 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:17.504 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:17.504 [20/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.764 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:17.764 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:17.764 [23/740] Linking target lib/librte_kvargs.so.23.0 00:02:17.764 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:17.764 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:17.764 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:17.764 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:17.764 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:17.764 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:17.764 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:17.764 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:17.764 [32/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 
00:02:17.764 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:17.764 [34/740] Linking static target lib/librte_telemetry.a 00:02:18.024 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:18.024 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:18.024 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:18.024 [38/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:18.024 [39/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:18.024 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:18.024 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:18.024 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:18.284 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:18.284 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:18.284 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:18.284 [46/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.284 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:18.284 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:18.284 [49/740] Linking target lib/librte_telemetry.so.23.0 00:02:18.284 [50/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:18.284 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:18.284 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:18.284 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:18.284 [54/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:18.284 [55/740] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:18.284 [56/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:18.284 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:18.284 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:18.284 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:18.284 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:18.284 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:18.284 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:18.545 [63/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:18.545 [64/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:18.545 [65/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:18.545 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:18.545 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:18.545 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:18.545 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:18.545 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:18.545 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:18.545 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:18.545 [73/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:18.545 [74/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:18.545 [75/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:18.545 [76/740] Generating lib/rte_eal_def with a custom command 00:02:18.545 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:18.545 
[78/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:18.545 [79/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:18.545 [80/740] Generating lib/rte_eal_mingw with a custom command 00:02:18.545 [81/740] Generating lib/rte_ring_def with a custom command 00:02:18.545 [82/740] Generating lib/rte_ring_mingw with a custom command 00:02:18.805 [83/740] Generating lib/rte_rcu_def with a custom command 00:02:18.805 [84/740] Generating lib/rte_rcu_mingw with a custom command 00:02:18.805 [85/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:18.805 [86/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:18.805 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:18.805 [88/740] Linking static target lib/librte_ring.a 00:02:18.805 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:18.805 [90/740] Generating lib/rte_mempool_def with a custom command 00:02:18.805 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:02:18.805 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:19.072 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:19.072 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.072 [95/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:19.072 [96/740] Generating lib/rte_mbuf_def with a custom command 00:02:19.072 [97/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:19.072 [98/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:19.072 [99/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:19.072 [100/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:19.072 [101/740] Linking static target lib/librte_eal.a 00:02:19.342 [102/740] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:19.342 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:19.342 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:19.342 [105/740] Linking static target lib/librte_rcu.a 00:02:19.342 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:19.342 [107/740] Linking static target lib/librte_mempool.a 00:02:19.602 [108/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:19.602 [109/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:19.602 [110/740] Generating lib/rte_net_def with a custom command 00:02:19.602 [111/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:19.602 [112/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:19.602 [113/740] Generating lib/rte_net_mingw with a custom command 00:02:19.603 [114/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:19.603 [115/740] Generating lib/rte_meter_def with a custom command 00:02:19.603 [116/740] Generating lib/rte_meter_mingw with a custom command 00:02:19.603 [117/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.603 [118/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:19.603 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:19.603 [120/740] Linking static target lib/librte_meter.a 00:02:19.863 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:19.863 [122/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:19.863 [123/740] Linking static target lib/librte_net.a 00:02:19.863 [124/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.863 [125/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:19.863 [126/740] Linking static target lib/librte_mbuf.a 00:02:20.123 [127/740] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:20.123 [128/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.123 [129/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:20.123 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:20.123 [131/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.123 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:20.123 [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:20.383 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:20.383 [135/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.383 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:20.643 [137/740] Generating lib/rte_ethdev_def with a custom command 00:02:20.643 [138/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:20.643 [139/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:20.643 [140/740] Generating lib/rte_pci_def with a custom command 00:02:20.643 [141/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:20.643 [142/740] Linking static target lib/librte_pci.a 00:02:20.643 [143/740] Generating lib/rte_pci_mingw with a custom command 00:02:20.643 [144/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:20.643 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:20.643 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:20.643 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:20.903 [148/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.903 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:20.903 
[150/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:20.903 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:20.903 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:20.903 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:20.903 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:20.903 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:20.903 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:20.903 [157/740] Generating lib/rte_cmdline_def with a custom command 00:02:20.903 [158/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:20.903 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:20.903 [160/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:20.903 [161/740] Generating lib/rte_metrics_def with a custom command 00:02:20.903 [162/740] Generating lib/rte_metrics_mingw with a custom command 00:02:21.164 [163/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:21.164 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:21.164 [165/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:21.164 [166/740] Generating lib/rte_hash_def with a custom command 00:02:21.164 [167/740] Generating lib/rte_hash_mingw with a custom command 00:02:21.164 [168/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:21.164 [169/740] Generating lib/rte_timer_def with a custom command 00:02:21.164 [170/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:21.164 [171/740] Generating lib/rte_timer_mingw with a custom command 00:02:21.164 [172/740] Linking static target lib/librte_cmdline.a 00:02:21.164 [173/740] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:21.424 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:21.424 [175/740] Linking static target lib/librte_metrics.a 00:02:21.424 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:21.424 [177/740] Linking static target lib/librte_timer.a 00:02:21.685 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.685 [179/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.685 [180/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:21.685 [181/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:21.946 [182/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:21.946 [183/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.946 [184/740] Generating lib/rte_acl_def with a custom command 00:02:21.946 [185/740] Generating lib/rte_acl_mingw with a custom command 00:02:21.946 [186/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:21.946 [187/740] Generating lib/rte_bbdev_def with a custom command 00:02:21.946 [188/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:21.946 [189/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:21.946 [190/740] Linking static target lib/librte_ethdev.a 00:02:21.946 [191/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:22.206 [192/740] Generating lib/rte_bitratestats_def with a custom command 00:02:22.206 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:22.467 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:22.467 [195/740] Linking static target lib/librte_bitratestats.a 00:02:22.467 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:22.467 [197/740] Compiling C object 
lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:22.467 [198/740] Linking static target lib/librte_bbdev.a 00:02:22.467 [199/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.727 [200/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:22.727 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:22.987 [202/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.987 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:22.987 [204/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:23.248 [205/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:23.248 [206/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:23.508 [207/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:23.508 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:23.508 [209/740] Linking static target lib/librte_hash.a 00:02:23.508 [210/740] Generating lib/rte_bpf_def with a custom command 00:02:23.508 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:02:23.768 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:23.768 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:02:23.768 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:23.768 [215/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:23.768 [216/740] Linking static target lib/librte_cfgfile.a 00:02:23.768 [217/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:23.768 [218/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:24.029 [219/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.029 [220/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.029 [221/740] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:24.029 [222/740] Generating lib/rte_compressdev_def with a custom command 00:02:24.029 [223/740] Linking static target lib/librte_bpf.a 00:02:24.029 [224/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:24.029 [225/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:24.289 [226/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.289 [227/740] Generating lib/rte_cryptodev_def with a custom command 00:02:24.289 [228/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:24.289 [229/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.289 [230/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:24.289 [231/740] Linking static target lib/librte_compressdev.a 00:02:24.289 [232/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.289 [233/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:24.550 [234/740] Linking static target lib/librte_acl.a 00:02:24.550 [235/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.550 [236/740] Generating lib/rte_distributor_def with a custom command 00:02:24.550 [237/740] Generating lib/rte_distributor_mingw with a custom command 00:02:24.550 [238/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:24.550 [239/740] Generating lib/rte_efd_def with a custom command 00:02:24.550 [240/740] Generating lib/rte_efd_mingw with a custom command 00:02:24.811 [241/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.811 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:24.811 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:24.811 [244/740] Generating 
lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.811 [245/740] Linking target lib/librte_eal.so.23.0 00:02:24.811 [246/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:24.811 [247/740] Linking static target lib/librte_distributor.a 00:02:25.071 [248/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:25.071 [249/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:25.071 [250/740] Linking target lib/librte_ring.so.23.0 00:02:25.071 [251/740] Linking target lib/librte_meter.so.23.0 00:02:25.071 [252/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.071 [253/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:25.071 [254/740] Linking target lib/librte_pci.so.23.0 00:02:25.071 [255/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.071 [256/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:25.071 [257/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:25.071 [258/740] Linking target lib/librte_rcu.so.23.0 00:02:25.071 [259/740] Linking target lib/librte_mempool.so.23.0 00:02:25.071 [260/740] Linking target lib/librte_timer.so.23.0 00:02:25.071 [261/740] Linking target lib/librte_acl.so.23.0 00:02:25.332 [262/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:25.332 [263/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:25.332 [264/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:25.332 [265/740] Linking target lib/librte_cfgfile.so.23.0 00:02:25.332 [266/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:25.332 [267/740] Linking target 
lib/librte_mbuf.so.23.0 00:02:25.332 [268/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:25.332 [269/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:25.332 [270/740] Linking target lib/librte_net.so.23.0 00:02:25.592 [271/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:25.592 [272/740] Linking target lib/librte_cmdline.so.23.0 00:02:25.592 [273/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:25.592 [274/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:25.592 [275/740] Linking target lib/librte_hash.so.23.0 00:02:25.592 [276/740] Linking target lib/librte_bbdev.so.23.0 00:02:25.592 [277/740] Linking target lib/librte_compressdev.so.23.0 00:02:25.592 [278/740] Linking static target lib/librte_efd.a 00:02:25.592 [279/740] Linking target lib/librte_distributor.so.23.0 00:02:25.592 [280/740] Generating lib/rte_eventdev_def with a custom command 00:02:25.592 [281/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:25.592 [282/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:25.592 [283/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:25.592 [284/740] Generating lib/rte_gpudev_def with a custom command 00:02:25.852 [285/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:25.852 [286/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.852 [287/740] Linking static target lib/librte_cryptodev.a 00:02:25.852 [288/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.852 [289/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.852 [290/740] Linking target lib/librte_efd.so.23.0 00:02:25.852 [291/740] Linking target lib/librte_ethdev.so.23.0 00:02:26.111 [292/740] Generating 
symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:26.111 [293/740] Linking target lib/librte_metrics.so.23.0 00:02:26.111 [294/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:26.111 [295/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:26.111 [296/740] Linking target lib/librte_bitratestats.so.23.0 00:02:26.111 [297/740] Linking target lib/librte_bpf.so.23.0 00:02:26.111 [298/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:26.111 [299/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:26.111 [300/740] Generating lib/rte_gro_def with a custom command 00:02:26.372 [301/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:26.372 [302/740] Generating lib/rte_gro_mingw with a custom command 00:02:26.372 [303/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:26.372 [304/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:26.372 [305/740] Linking static target lib/librte_gpudev.a 00:02:26.372 [306/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:26.372 [307/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:26.631 [308/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:26.631 [309/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:26.631 [310/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:26.631 [311/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:26.631 [312/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:26.631 [313/740] Generating lib/rte_gso_mingw with a custom command 00:02:26.631 [314/740] Generating lib/rte_gso_def with a custom command 00:02:26.631 [315/740] Linking static target lib/librte_gro.a 00:02:26.631 [316/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 
00:02:26.891 [317/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:26.891 [318/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:26.891 [319/740] Linking static target lib/librte_eventdev.a 00:02:26.891 [320/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.891 [321/740] Linking target lib/librte_gro.so.23.0 00:02:27.152 [322/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.152 [323/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:27.152 [324/740] Linking static target lib/librte_gso.a 00:02:27.152 [325/740] Linking target lib/librte_gpudev.so.23.0 00:02:27.152 [326/740] Generating lib/rte_ip_frag_def with a custom command 00:02:27.152 [327/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:27.152 [328/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:27.152 [329/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:27.152 [330/740] Generating lib/rte_jobstats_def with a custom command 00:02:27.152 [331/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:27.152 [332/740] Generating lib/rte_latencystats_def with a custom command 00:02:27.152 [333/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.152 [334/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:27.152 [335/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:27.152 [336/740] Linking static target lib/librte_jobstats.a 00:02:27.152 [337/740] Linking target lib/librte_gso.so.23.0 00:02:27.152 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:27.427 [339/740] Generating lib/rte_lpm_def with a custom command 00:02:27.427 [340/740] Generating lib/rte_lpm_mingw with a custom command 00:02:27.427 
[341/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:27.427 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:27.427 [343/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.427 [344/740] Linking target lib/librte_jobstats.so.23.0 00:02:27.689 [345/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:27.689 [346/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.689 [347/740] Linking static target lib/librte_ip_frag.a 00:02:27.689 [348/740] Linking target lib/librte_cryptodev.so.23.0 00:02:27.689 [349/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:27.689 [350/740] Linking static target lib/librte_latencystats.a 00:02:27.689 [351/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:27.689 [352/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:27.689 [353/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:27.689 [354/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:27.689 [355/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:27.689 [356/740] Generating lib/rte_member_mingw with a custom command 00:02:27.948 [357/740] Generating lib/rte_member_def with a custom command 00:02:27.948 [358/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.948 [359/740] Generating lib/rte_pcapng_def with a custom command 00:02:27.948 [360/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.948 [361/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:27.948 [362/740] Linking target lib/librte_latencystats.so.23.0 00:02:27.948 [363/740] Linking target 
lib/librte_ip_frag.so.23.0 00:02:27.948 [364/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:27.948 [365/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:27.948 [366/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:27.949 [367/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:28.208 [368/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:28.208 [369/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:28.208 [370/740] Linking static target lib/librte_lpm.a 00:02:28.208 [371/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:28.208 [372/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:28.208 [373/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:28.208 [374/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:28.208 [375/740] Generating lib/rte_power_def with a custom command 00:02:28.468 [376/740] Generating lib/rte_power_mingw with a custom command 00:02:28.468 [377/740] Generating lib/rte_rawdev_def with a custom command 00:02:28.468 [378/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:28.468 [379/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:28.468 [380/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.468 [381/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.468 [382/740] Generating lib/rte_regexdev_def with a custom command 00:02:28.468 [383/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:28.468 [384/740] Linking target lib/librte_lpm.so.23.0 00:02:28.468 [385/740] Linking target lib/librte_eventdev.so.23.0 00:02:28.468 [386/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:28.468 [387/740] Linking 
static target lib/librte_pcapng.a 00:02:28.468 [388/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:28.468 [389/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:28.468 [390/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:28.468 [391/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:28.728 [392/740] Generating lib/rte_dmadev_def with a custom command 00:02:28.728 [393/740] Generating lib/rte_rib_def with a custom command 00:02:28.728 [394/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:28.728 [395/740] Linking static target lib/librte_rawdev.a 00:02:28.728 [396/740] Generating lib/rte_rib_mingw with a custom command 00:02:28.728 [397/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:28.728 [398/740] Generating lib/rte_reorder_def with a custom command 00:02:28.728 [399/740] Generating lib/rte_reorder_mingw with a custom command 00:02:28.728 [400/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.728 [401/740] Linking target lib/librte_pcapng.so.23.0 00:02:28.728 [402/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:28.728 [403/740] Linking static target lib/librte_dmadev.a 00:02:28.728 [404/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:28.728 [405/740] Linking static target lib/librte_power.a 00:02:28.987 [406/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:28.987 [407/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:28.987 [408/740] Linking static target lib/librte_regexdev.a 00:02:28.987 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:28.987 [410/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.987 [411/740] 
Linking target lib/librte_rawdev.so.23.0 00:02:28.987 [412/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:28.987 [413/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:29.246 [414/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:29.246 [415/740] Generating lib/rte_sched_def with a custom command 00:02:29.246 [416/740] Generating lib/rte_sched_mingw with a custom command 00:02:29.246 [417/740] Generating lib/rte_security_def with a custom command 00:02:29.246 [418/740] Generating lib/rte_security_mingw with a custom command 00:02:29.246 [419/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:29.246 [420/740] Linking static target lib/librte_reorder.a 00:02:29.246 [421/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.246 [422/740] Linking target lib/librte_dmadev.so.23.0 00:02:29.246 [423/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:29.246 [424/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:29.246 [425/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:29.246 [426/740] Linking static target lib/librte_member.a 00:02:29.246 [427/740] Generating lib/rte_stack_def with a custom command 00:02:29.246 [428/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:29.246 [429/740] Linking static target lib/librte_stack.a 00:02:29.246 [430/740] Generating lib/rte_stack_mingw with a custom command 00:02:29.246 [431/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:29.246 [432/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:29.246 [433/740] Linking static target lib/librte_rib.a 00:02:29.506 [434/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.506 [435/740] Linking target lib/librte_reorder.so.23.0 00:02:29.506 [436/740] 
Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:29.506 [437/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.506 [438/740] Linking target lib/librte_regexdev.so.23.0 00:02:29.506 [439/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.506 [440/740] Linking target lib/librte_stack.so.23.0 00:02:29.506 [441/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.506 [442/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.506 [443/740] Linking target lib/librte_power.so.23.0 00:02:29.506 [444/740] Linking target lib/librte_member.so.23.0 00:02:29.765 [445/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.765 [446/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:29.765 [447/740] Linking static target lib/librte_security.a 00:02:29.765 [448/740] Linking target lib/librte_rib.so.23.0 00:02:29.765 [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:29.765 [450/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:29.765 [451/740] Generating lib/rte_vhost_def with a custom command 00:02:29.765 [452/740] Generating lib/rte_vhost_mingw with a custom command 00:02:29.765 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:30.024 [454/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:30.024 [455/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.024 [456/740] Linking target lib/librte_security.so.23.0 00:02:30.024 [457/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:30.024 [458/740] Linking static target lib/librte_sched.a 00:02:30.284 [459/740] Generating symbol file 
lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:30.284 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:30.284 [461/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:30.284 [462/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.544 [463/740] Generating lib/rte_ipsec_def with a custom command 00:02:30.544 [464/740] Linking target lib/librte_sched.so.23.0 00:02:30.544 [465/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:30.544 [466/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:30.544 [467/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:30.544 [468/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:30.544 [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:30.804 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:30.804 [471/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:30.804 [472/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:30.804 [473/740] Generating lib/rte_fib_def with a custom command 00:02:30.804 [474/740] Generating lib/rte_fib_mingw with a custom command 00:02:31.064 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:31.064 [476/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:31.323 [477/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:31.323 [478/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:31.323 [479/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:31.323 [480/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:31.323 [481/740] Linking static target lib/librte_ipsec.a 00:02:31.323 [482/740] Linking static target lib/librte_fib.a 00:02:31.582 [483/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:31.582 [484/740] 
Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:31.582 [485/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:31.582 [486/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.582 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:31.582 [488/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.582 [489/740] Linking target lib/librte_fib.so.23.0 00:02:31.582 [490/740] Linking target lib/librte_ipsec.so.23.0 00:02:31.841 [491/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:32.101 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:32.101 [493/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:32.101 [494/740] Generating lib/rte_port_def with a custom command 00:02:32.361 [495/740] Generating lib/rte_port_mingw with a custom command 00:02:32.361 [496/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:32.361 [497/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:32.361 [498/740] Generating lib/rte_pdump_def with a custom command 00:02:32.361 [499/740] Generating lib/rte_pdump_mingw with a custom command 00:02:32.361 [500/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:32.361 [501/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:32.361 [502/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:32.361 [503/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:32.361 [504/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:32.621 [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:32.880 [506/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:32.880 [507/740] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:32.880 [508/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:32.880 [509/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:32.880 [510/740] Linking static target lib/librte_port.a 00:02:32.880 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:32.880 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:33.140 [513/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:33.140 [514/740] Linking static target lib/librte_pdump.a 00:02:33.399 [515/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:33.399 [516/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.399 [517/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:33.399 [518/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.399 [519/740] Linking target lib/librte_pdump.so.23.0 00:02:33.399 [520/740] Generating lib/rte_table_def with a custom command 00:02:33.399 [521/740] Linking target lib/librte_port.so.23.0 00:02:33.399 [522/740] Generating lib/rte_table_mingw with a custom command 00:02:33.399 [523/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:33.658 [524/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:33.658 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:33.658 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:33.658 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:33.658 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:33.917 [529/740] Generating lib/rte_pipeline_def with a custom command 00:02:33.917 [530/740] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:33.917 [531/740] Linking static target lib/librte_table.a 00:02:33.917 [532/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:33.917 [533/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:34.180 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:34.180 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:34.180 [536/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.180 [537/740] Linking target lib/librte_table.so.23.0 00:02:34.439 [538/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:34.439 [539/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:34.439 [540/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:34.439 [541/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:34.439 [542/740] Generating lib/rte_graph_def with a custom command 00:02:34.698 [543/740] Generating lib/rte_graph_mingw with a custom command 00:02:34.698 [544/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:34.698 [545/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:34.698 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:34.698 [547/740] Linking static target lib/librte_graph.a 00:02:34.956 [548/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:34.956 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:35.215 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:35.215 [551/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:35.215 [552/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:35.476 [553/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:35.476 [554/740] 
Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:35.476 [555/740] Generating lib/rte_node_def with a custom command 00:02:35.476 [556/740] Generating lib/rte_node_mingw with a custom command 00:02:35.476 [557/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.476 [558/740] Linking target lib/librte_graph.so.23.0 00:02:35.476 [559/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:35.476 [560/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:35.476 [561/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:35.476 [562/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:35.736 [563/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:35.736 [564/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:35.736 [565/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:35.736 [566/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:35.736 [567/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:35.736 [568/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:35.736 [569/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:35.736 [570/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:35.736 [571/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:35.736 [572/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:35.736 [573/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:35.995 [574/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:35.995 [575/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:35.995 [576/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:35.995 [577/740] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:35.995 [578/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:35.995 [579/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:35.995 [580/740] Linking static target lib/librte_node.a 00:02:35.995 [581/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:35.995 [582/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.995 [583/740] Linking static target drivers/librte_bus_vdev.a 00:02:36.255 [584/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:36.255 [585/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.255 [586/740] Linking static target drivers/librte_bus_pci.a 00:02:36.255 [587/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.255 [588/740] Linking target lib/librte_node.so.23.0 00:02:36.255 [589/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.255 [590/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.255 [591/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.255 [592/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:36.515 [593/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:36.515 [594/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:36.515 [595/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.515 [596/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:36.515 [597/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:36.515 [598/740] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:36.515 [599/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:36.515 [600/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:36.774 [601/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:36.774 [602/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:36.774 [603/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.774 [604/740] Linking static target drivers/librte_mempool_ring.a 00:02:36.774 [605/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.774 [606/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:36.774 [607/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:37.033 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:37.292 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:37.551 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:37.551 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:37.551 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:38.120 [613/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:38.120 [614/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:38.120 [615/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:38.120 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:38.379 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:38.379 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:38.379 [619/740] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:38.379 [620/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:38.638 [621/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:38.897 [622/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:39.160 [623/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:39.437 [624/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:39.437 [625/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:39.694 [626/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:39.694 [627/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:39.694 [628/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:39.694 [629/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:39.694 [630/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:39.694 [631/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:39.694 [632/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:39.694 [633/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:40.261 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:40.261 [635/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:40.262 [636/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:40.521 [637/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:40.521 [638/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:40.521 [639/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:40.780 [640/740] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:40.780 [641/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:40.780 [642/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:40.780 [643/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:40.780 [644/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:40.780 [645/740] Linking static target drivers/librte_net_i40e.a 00:02:40.780 [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:40.780 [647/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:41.039 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:41.299 [649/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:41.299 [650/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:41.299 [651/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.299 [652/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:41.558 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:41.558 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:41.558 [655/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:41.558 [656/740] Linking static target lib/librte_vhost.a 00:02:41.558 [657/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:41.817 [658/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:41.817 [659/740] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:41.817 [660/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:41.817 [661/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:41.817 [662/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:42.076 [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:42.076 [664/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:42.077 [665/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:42.336 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:42.336 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:42.597 [668/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.597 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:42.597 [670/740] Linking target lib/librte_vhost.so.23.0 00:02:42.856 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:42.856 [672/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:43.115 [673/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:43.115 [674/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:43.115 [675/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:43.374 [676/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:43.374 [677/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:43.374 [678/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:43.374 [679/740] Compiling C object 
app/dpdk-test-fib.p/test-fib_main.c.o 00:02:43.633 [680/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:43.633 [681/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:43.633 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:43.633 [683/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:43.633 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:43.893 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:44.152 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:44.152 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:44.152 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:44.152 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:44.152 [690/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:44.411 [691/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:44.411 [692/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:44.411 [693/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:44.411 [694/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:44.670 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:44.670 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:45.239 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:45.239 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:45.239 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:45.239 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:45.497 [701/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:45.497 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:45.756 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:45.756 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:46.016 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:46.016 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:46.275 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:46.534 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:46.534 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:46.534 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:46.792 [711/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:46.792 [712/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:46.793 [713/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:47.052 [714/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:47.052 [715/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:47.052 [716/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:47.311 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:47.311 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:47.311 [719/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:47.311 [720/740] Linking static target lib/librte_pipeline.a 00:02:47.570 [721/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:47.829 [722/740] Linking target app/dpdk-proc-info 00:02:47.829 [723/740] Linking target app/dpdk-test-bbdev 00:02:47.829 [724/740] Linking target app/dpdk-test-cmdline 00:02:47.829 [725/740] Linking target app/dpdk-test-acl 
00:02:47.829 [726/740] Linking target app/dpdk-test-crypto-perf 00:02:47.829 [727/740] Linking target app/dpdk-test-compress-perf 00:02:47.829 [728/740] Linking target app/dpdk-pdump 00:02:47.829 [729/740] Linking target app/dpdk-dumpcap 00:02:48.088 [730/740] Linking target app/dpdk-test-eventdev 00:02:48.088 [731/740] Linking target app/dpdk-test-flow-perf 00:02:48.088 [732/740] Linking target app/dpdk-test-fib 00:02:48.088 [733/740] Linking target app/dpdk-test-pipeline 00:02:48.348 [734/740] Linking target app/dpdk-test-gpudev 00:02:48.348 [735/740] Linking target app/dpdk-testpmd 00:02:48.348 [736/740] Linking target app/dpdk-test-sad 00:02:48.348 [737/740] Linking target app/dpdk-test-regex 00:02:48.348 [738/740] Linking target app/dpdk-test-security-perf 00:02:52.541 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.541 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:52.541 01:04:05 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:52.541 01:04:05 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:52.541 01:04:05 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:52.541 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:52.541 [0/1] Installing files. 
00:02:53.130 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.130 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:53.130 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:53.130 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:53.131 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:53.131 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:53.131 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:53.132 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.132 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:53.132 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.132 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:53.132 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:53.132 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.132 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.132 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.132 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.132 Installing lib/librte_eal.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.132 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.133 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.133 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.133 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.133 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.133 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.133 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.133 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.133 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.133 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing 
lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing 
lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 
Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:53.395 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:53.395 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:53.395 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.395 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:53.396 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-test-acl to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.396 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.397 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 
Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:53.398 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:53.398 Installing symlink pointing to librte_kvargs.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:53.398 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:53.398 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:53.398 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:53.398 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:53.398 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:53.398 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:53.398 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:53.398 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:53.398 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:53.398 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:53.398 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:53.398 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:53.398 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:53.398 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:53.398 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:53.399 Installing symlink pointing to librte_meter.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:53.399 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:53.399 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:53.399 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:53.399 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:53.399 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:53.399 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:53.399 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:53.399 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:53.399 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:53.399 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:53.399 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:53.399 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:53.399 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:53.399 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:53.399 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:53.399 Installing symlink pointing to librte_bbdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:53.399 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:53.399 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:53.399 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:53.399 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:53.399 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:53.399 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:53.399 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:53.399 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:53.399 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:53.399 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:53.399 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:53.399 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:53.399 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:53.399 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:53.399 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 
00:02:53.399 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:53.399 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:53.399 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:53.399 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:53.399 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:53.399 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:53.399 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:53.399 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:53.399 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:53.399 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:53.399 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:53.399 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:53.399 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:53.399 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:53.399 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:53.399 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:53.399 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:53.399 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:53.399 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:53.399 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:53.399 Installing 
symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:53.399 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:53.399 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:53.399 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:53.399 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:53.399 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:53.399 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:53.399 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:53.399 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:53.399 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:53.399 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:53.399 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:53.399 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:53.399 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:53.399 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:53.399 Installing symlink pointing to librte_rawdev.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:53.399 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:53.399 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:53.399 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:53.399 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:53.399 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:53.399 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:53.399 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:53.399 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:53.399 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:53.399 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:53.399 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:53.399 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:53.399 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:53.399 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:53.399 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:53.399 Installing symlink pointing to 
librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:53.399 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:53.399 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:53.399 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:53.399 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:53.399 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:53.399 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:53.399 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:53.399 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:53.399 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:53.399 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:53.399 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:53.399 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:53.400 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:53.400 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:53.400 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:53.400 Installing symlink pointing to librte_node.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:53.400 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:53.400 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:53.400 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:53.400 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:53.400 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:53.400 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:53.400 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:53.400 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:53.400 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:53.659 01:04:06 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:53.659 01:04:06 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:53.659 00:02:53.659 real 0m43.354s 00:02:53.659 user 4m19.016s 00:02:53.659 sys 0m50.520s 00:02:53.659 01:04:06 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:53.659 01:04:06 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:53.659 ************************************ 00:02:53.659 END TEST build_native_dpdk 00:02:53.659 ************************************ 00:02:53.659 01:04:06 
-- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:53.659 01:04:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:53.659 01:04:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:53.659 01:04:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:53.659 01:04:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:53.659 01:04:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:53.659 01:04:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:53.659 01:04:06 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:53.659 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:53.917 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.917 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:53.917 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:54.174 Using 'verbs' RDMA provider 00:03:09.999 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:24.957 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:25.526 Creating mk/config.mk...done. 00:03:25.526 Creating mk/cc.flags.mk...done. 00:03:25.526 Type 'make' to build. 
00:03:25.786 01:04:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:25.786 01:04:38 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:25.786 01:04:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:25.786 01:04:38 -- common/autotest_common.sh@10 -- $ set +x 00:03:25.786 ************************************ 00:03:25.786 START TEST make 00:03:25.786 ************************************ 00:03:25.786 01:04:38 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:26.045 make[1]: Nothing to be done for 'all'. 00:04:12.742 CC lib/ut/ut.o 00:04:12.742 CC lib/ut_mock/mock.o 00:04:12.742 CC lib/log/log.o 00:04:12.742 CC lib/log/log_deprecated.o 00:04:12.742 CC lib/log/log_flags.o 00:04:12.742 LIB libspdk_ut.a 00:04:12.742 LIB libspdk_ut_mock.a 00:04:12.742 SO libspdk_ut.so.2.0 00:04:12.742 LIB libspdk_log.a 00:04:12.742 SO libspdk_ut_mock.so.6.0 00:04:12.742 SYMLINK libspdk_ut.so 00:04:12.742 SO libspdk_log.so.7.1 00:04:12.742 SYMLINK libspdk_ut_mock.so 00:04:12.742 SYMLINK libspdk_log.so 00:04:12.742 CC lib/dma/dma.o 00:04:12.742 CC lib/util/base64.o 00:04:12.742 CC lib/util/cpuset.o 00:04:12.742 CC lib/util/bit_array.o 00:04:12.742 CC lib/util/crc16.o 00:04:12.742 CC lib/util/crc32c.o 00:04:12.742 CC lib/util/crc32.o 00:04:12.742 CC lib/ioat/ioat.o 00:04:12.742 CXX lib/trace_parser/trace.o 00:04:12.742 CC lib/vfio_user/host/vfio_user_pci.o 00:04:12.742 CC lib/vfio_user/host/vfio_user.o 00:04:12.742 CC lib/util/crc32_ieee.o 00:04:12.742 CC lib/util/crc64.o 00:04:12.742 CC lib/util/dif.o 00:04:12.742 LIB libspdk_dma.a 00:04:12.742 SO libspdk_dma.so.5.0 00:04:12.742 CC lib/util/fd.o 00:04:12.742 CC lib/util/fd_group.o 00:04:12.742 CC lib/util/file.o 00:04:12.742 SYMLINK libspdk_dma.so 00:04:12.742 CC lib/util/hexlify.o 00:04:12.742 CC lib/util/iov.o 00:04:12.742 LIB libspdk_ioat.a 00:04:12.742 CC lib/util/math.o 00:04:12.742 SO libspdk_ioat.so.7.0 00:04:12.742 LIB libspdk_vfio_user.a 00:04:12.742 CC lib/util/net.o 00:04:12.742 SO 
libspdk_vfio_user.so.5.0 00:04:12.742 SYMLINK libspdk_ioat.so 00:04:12.742 CC lib/util/pipe.o 00:04:12.742 CC lib/util/strerror_tls.o 00:04:12.742 CC lib/util/string.o 00:04:12.742 SYMLINK libspdk_vfio_user.so 00:04:12.742 CC lib/util/uuid.o 00:04:12.743 CC lib/util/xor.o 00:04:12.743 CC lib/util/zipf.o 00:04:12.743 CC lib/util/md5.o 00:04:13.002 LIB libspdk_util.a 00:04:13.002 SO libspdk_util.so.10.0 00:04:13.260 LIB libspdk_trace_parser.a 00:04:13.260 SYMLINK libspdk_util.so 00:04:13.260 SO libspdk_trace_parser.so.6.0 00:04:13.260 SYMLINK libspdk_trace_parser.so 00:04:13.260 CC lib/json/json_parse.o 00:04:13.260 CC lib/rdma_utils/rdma_utils.o 00:04:13.260 CC lib/json/json_util.o 00:04:13.260 CC lib/json/json_write.o 00:04:13.260 CC lib/env_dpdk/env.o 00:04:13.260 CC lib/rdma_provider/common.o 00:04:13.260 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:13.260 CC lib/conf/conf.o 00:04:13.260 CC lib/idxd/idxd.o 00:04:13.260 CC lib/vmd/vmd.o 00:04:13.518 CC lib/idxd/idxd_user.o 00:04:13.518 LIB libspdk_rdma_provider.a 00:04:13.518 SO libspdk_rdma_provider.so.6.0 00:04:13.518 LIB libspdk_conf.a 00:04:13.518 CC lib/idxd/idxd_kernel.o 00:04:13.519 SO libspdk_conf.so.6.0 00:04:13.519 CC lib/env_dpdk/memory.o 00:04:13.519 LIB libspdk_rdma_utils.a 00:04:13.519 SYMLINK libspdk_rdma_provider.so 00:04:13.519 CC lib/env_dpdk/pci.o 00:04:13.777 LIB libspdk_json.a 00:04:13.777 SO libspdk_rdma_utils.so.1.0 00:04:13.777 SYMLINK libspdk_conf.so 00:04:13.777 CC lib/vmd/led.o 00:04:13.777 SO libspdk_json.so.6.0 00:04:13.777 SYMLINK libspdk_rdma_utils.so 00:04:13.777 CC lib/env_dpdk/init.o 00:04:13.777 SYMLINK libspdk_json.so 00:04:13.777 CC lib/env_dpdk/threads.o 00:04:13.777 CC lib/env_dpdk/pci_ioat.o 00:04:13.777 CC lib/env_dpdk/pci_virtio.o 00:04:13.777 CC lib/env_dpdk/pci_vmd.o 00:04:13.777 CC lib/env_dpdk/pci_idxd.o 00:04:13.777 CC lib/env_dpdk/pci_event.o 00:04:14.035 CC lib/env_dpdk/sigbus_handler.o 00:04:14.035 CC lib/env_dpdk/pci_dpdk.o 00:04:14.035 CC 
lib/env_dpdk/pci_dpdk_2207.o 00:04:14.035 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:14.035 LIB libspdk_idxd.a 00:04:14.035 SO libspdk_idxd.so.12.1 00:04:14.035 LIB libspdk_vmd.a 00:04:14.294 SO libspdk_vmd.so.6.0 00:04:14.294 SYMLINK libspdk_idxd.so 00:04:14.294 CC lib/jsonrpc/jsonrpc_server.o 00:04:14.294 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:14.294 CC lib/jsonrpc/jsonrpc_client.o 00:04:14.294 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:14.294 SYMLINK libspdk_vmd.so 00:04:14.553 LIB libspdk_jsonrpc.a 00:04:14.553 SO libspdk_jsonrpc.so.6.0 00:04:14.553 SYMLINK libspdk_jsonrpc.so 00:04:15.122 LIB libspdk_env_dpdk.a 00:04:15.122 CC lib/rpc/rpc.o 00:04:15.122 SO libspdk_env_dpdk.so.15.0 00:04:15.122 SYMLINK libspdk_env_dpdk.so 00:04:15.387 LIB libspdk_rpc.a 00:04:15.387 SO libspdk_rpc.so.6.0 00:04:15.387 SYMLINK libspdk_rpc.so 00:04:15.648 CC lib/notify/notify.o 00:04:15.648 CC lib/notify/notify_rpc.o 00:04:15.648 CC lib/trace/trace.o 00:04:15.648 CC lib/trace/trace_flags.o 00:04:15.648 CC lib/trace/trace_rpc.o 00:04:15.648 CC lib/keyring/keyring.o 00:04:15.648 CC lib/keyring/keyring_rpc.o 00:04:15.907 LIB libspdk_notify.a 00:04:15.907 SO libspdk_notify.so.6.0 00:04:15.907 LIB libspdk_keyring.a 00:04:15.907 SYMLINK libspdk_notify.so 00:04:16.167 LIB libspdk_trace.a 00:04:16.167 SO libspdk_keyring.so.2.0 00:04:16.167 SO libspdk_trace.so.11.0 00:04:16.167 SYMLINK libspdk_keyring.so 00:04:16.167 SYMLINK libspdk_trace.so 00:04:16.735 CC lib/thread/thread.o 00:04:16.735 CC lib/thread/iobuf.o 00:04:16.735 CC lib/sock/sock.o 00:04:16.735 CC lib/sock/sock_rpc.o 00:04:16.995 LIB libspdk_sock.a 00:04:16.995 SO libspdk_sock.so.10.0 00:04:17.255 SYMLINK libspdk_sock.so 00:04:17.513 CC lib/nvme/nvme_ctrlr.o 00:04:17.513 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:17.513 CC lib/nvme/nvme_fabric.o 00:04:17.513 CC lib/nvme/nvme_ns_cmd.o 00:04:17.513 CC lib/nvme/nvme_pcie.o 00:04:17.513 CC lib/nvme/nvme_ns.o 00:04:17.513 CC lib/nvme/nvme_pcie_common.o 00:04:17.513 CC lib/nvme/nvme.o 00:04:17.513 
CC lib/nvme/nvme_qpair.o 00:04:18.080 CC lib/nvme/nvme_quirks.o 00:04:18.080 LIB libspdk_thread.a 00:04:18.338 CC lib/nvme/nvme_transport.o 00:04:18.338 SO libspdk_thread.so.10.2 00:04:18.338 CC lib/nvme/nvme_discovery.o 00:04:18.338 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:18.338 SYMLINK libspdk_thread.so 00:04:18.338 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:18.596 CC lib/accel/accel.o 00:04:18.596 CC lib/nvme/nvme_tcp.o 00:04:18.596 CC lib/blob/blobstore.o 00:04:18.596 CC lib/blob/request.o 00:04:18.854 CC lib/nvme/nvme_opal.o 00:04:18.854 CC lib/accel/accel_rpc.o 00:04:18.854 CC lib/accel/accel_sw.o 00:04:19.112 CC lib/blob/zeroes.o 00:04:19.112 CC lib/blob/blob_bs_dev.o 00:04:19.112 CC lib/init/json_config.o 00:04:19.112 CC lib/virtio/virtio.o 00:04:19.112 CC lib/init/subsystem.o 00:04:19.371 CC lib/virtio/virtio_vhost_user.o 00:04:19.371 CC lib/nvme/nvme_io_msg.o 00:04:19.371 CC lib/fsdev/fsdev.o 00:04:19.371 CC lib/init/subsystem_rpc.o 00:04:19.371 CC lib/nvme/nvme_poll_group.o 00:04:19.629 CC lib/init/rpc.o 00:04:19.629 CC lib/fsdev/fsdev_io.o 00:04:19.629 CC lib/virtio/virtio_vfio_user.o 00:04:19.629 LIB libspdk_accel.a 00:04:19.629 LIB libspdk_init.a 00:04:19.887 SO libspdk_accel.so.16.0 00:04:19.887 SO libspdk_init.so.6.0 00:04:19.887 SYMLINK libspdk_init.so 00:04:19.887 SYMLINK libspdk_accel.so 00:04:19.887 CC lib/fsdev/fsdev_rpc.o 00:04:19.887 CC lib/nvme/nvme_zns.o 00:04:19.887 CC lib/virtio/virtio_pci.o 00:04:19.887 CC lib/nvme/nvme_stubs.o 00:04:19.887 CC lib/event/app.o 00:04:19.887 CC lib/event/reactor.o 00:04:20.145 CC lib/event/log_rpc.o 00:04:20.145 CC lib/bdev/bdev.o 00:04:20.145 LIB libspdk_fsdev.a 00:04:20.145 SO libspdk_fsdev.so.1.0 00:04:20.145 CC lib/nvme/nvme_auth.o 00:04:20.145 CC lib/nvme/nvme_cuse.o 00:04:20.145 LIB libspdk_virtio.a 00:04:20.145 SYMLINK libspdk_fsdev.so 00:04:20.145 CC lib/nvme/nvme_rdma.o 00:04:20.403 SO libspdk_virtio.so.7.0 00:04:20.403 SYMLINK libspdk_virtio.so 00:04:20.403 CC lib/bdev/bdev_rpc.o 00:04:20.403 CC 
lib/bdev/bdev_zone.o 00:04:20.403 CC lib/bdev/part.o 00:04:20.403 CC lib/event/app_rpc.o 00:04:20.403 CC lib/bdev/scsi_nvme.o 00:04:20.662 CC lib/event/scheduler_static.o 00:04:20.662 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:20.921 LIB libspdk_event.a 00:04:20.921 SO libspdk_event.so.14.0 00:04:20.921 SYMLINK libspdk_event.so 00:04:21.490 LIB libspdk_fuse_dispatcher.a 00:04:21.490 SO libspdk_fuse_dispatcher.so.1.0 00:04:21.490 SYMLINK libspdk_fuse_dispatcher.so 00:04:21.490 LIB libspdk_nvme.a 00:04:21.750 SO libspdk_nvme.so.14.0 00:04:22.010 SYMLINK libspdk_nvme.so 00:04:22.010 LIB libspdk_blob.a 00:04:22.270 SO libspdk_blob.so.11.0 00:04:22.270 SYMLINK libspdk_blob.so 00:04:22.530 CC lib/blobfs/tree.o 00:04:22.530 CC lib/blobfs/blobfs.o 00:04:22.530 CC lib/lvol/lvol.o 00:04:22.789 LIB libspdk_bdev.a 00:04:22.789 SO libspdk_bdev.so.17.0 00:04:23.049 SYMLINK libspdk_bdev.so 00:04:23.308 CC lib/scsi/port.o 00:04:23.308 CC lib/scsi/lun.o 00:04:23.308 CC lib/scsi/dev.o 00:04:23.308 CC lib/scsi/scsi.o 00:04:23.308 CC lib/nbd/nbd.o 00:04:23.308 CC lib/ublk/ublk.o 00:04:23.308 CC lib/nvmf/ctrlr.o 00:04:23.308 CC lib/ftl/ftl_core.o 00:04:23.308 CC lib/ftl/ftl_init.o 00:04:23.308 CC lib/ftl/ftl_layout.o 00:04:23.566 CC lib/ublk/ublk_rpc.o 00:04:23.566 LIB libspdk_blobfs.a 00:04:23.566 SO libspdk_blobfs.so.10.0 00:04:23.566 CC lib/scsi/scsi_bdev.o 00:04:23.566 CC lib/scsi/scsi_pr.o 00:04:23.566 SYMLINK libspdk_blobfs.so 00:04:23.566 CC lib/scsi/scsi_rpc.o 00:04:23.566 CC lib/ftl/ftl_debug.o 00:04:23.566 LIB libspdk_lvol.a 00:04:23.566 CC lib/nbd/nbd_rpc.o 00:04:23.566 SO libspdk_lvol.so.10.0 00:04:23.566 CC lib/scsi/task.o 00:04:23.825 SYMLINK libspdk_lvol.so 00:04:23.825 CC lib/ftl/ftl_io.o 00:04:23.825 CC lib/ftl/ftl_sb.o 00:04:23.825 CC lib/nvmf/ctrlr_discovery.o 00:04:23.825 LIB libspdk_nbd.a 00:04:23.825 CC lib/ftl/ftl_l2p.o 00:04:23.825 SO libspdk_nbd.so.7.0 00:04:23.825 CC lib/ftl/ftl_l2p_flat.o 00:04:23.825 SYMLINK libspdk_nbd.so 00:04:23.825 CC 
lib/ftl/ftl_nv_cache.o 00:04:23.825 CC lib/ftl/ftl_band.o 00:04:23.825 CC lib/ftl/ftl_band_ops.o 00:04:23.825 LIB libspdk_ublk.a 00:04:23.825 SO libspdk_ublk.so.3.0 00:04:24.083 CC lib/ftl/ftl_writer.o 00:04:24.083 SYMLINK libspdk_ublk.so 00:04:24.083 CC lib/nvmf/ctrlr_bdev.o 00:04:24.083 CC lib/nvmf/subsystem.o 00:04:24.083 LIB libspdk_scsi.a 00:04:24.083 CC lib/ftl/ftl_rq.o 00:04:24.083 SO libspdk_scsi.so.9.0 00:04:24.083 SYMLINK libspdk_scsi.so 00:04:24.083 CC lib/ftl/ftl_reloc.o 00:04:24.341 CC lib/ftl/ftl_l2p_cache.o 00:04:24.341 CC lib/ftl/ftl_p2l.o 00:04:24.341 CC lib/ftl/ftl_p2l_log.o 00:04:24.341 CC lib/iscsi/conn.o 00:04:24.341 CC lib/vhost/vhost.o 00:04:24.603 CC lib/ftl/mngt/ftl_mngt.o 00:04:24.603 CC lib/iscsi/init_grp.o 00:04:24.603 CC lib/iscsi/iscsi.o 00:04:24.865 CC lib/iscsi/param.o 00:04:24.865 CC lib/iscsi/portal_grp.o 00:04:24.865 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:24.865 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:24.865 CC lib/vhost/vhost_rpc.o 00:04:24.865 CC lib/iscsi/tgt_node.o 00:04:24.865 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:25.123 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:25.123 CC lib/iscsi/iscsi_subsystem.o 00:04:25.123 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:25.123 CC lib/iscsi/iscsi_rpc.o 00:04:25.123 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:25.123 CC lib/iscsi/task.o 00:04:25.380 CC lib/nvmf/nvmf.o 00:04:25.380 CC lib/nvmf/nvmf_rpc.o 00:04:25.380 CC lib/vhost/vhost_scsi.o 00:04:25.380 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:25.380 CC lib/nvmf/transport.o 00:04:25.380 CC lib/vhost/vhost_blk.o 00:04:25.637 CC lib/vhost/rte_vhost_user.o 00:04:25.637 CC lib/nvmf/tcp.o 00:04:25.637 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:25.637 CC lib/nvmf/stubs.o 00:04:25.895 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:25.895 CC lib/nvmf/mdns_server.o 00:04:26.153 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:26.153 CC lib/nvmf/rdma.o 00:04:26.153 LIB libspdk_iscsi.a 00:04:26.153 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:26.153 CC lib/nvmf/auth.o 00:04:26.153 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:26.153 SO libspdk_iscsi.so.8.0 00:04:26.153 CC lib/ftl/utils/ftl_conf.o 00:04:26.153 CC lib/ftl/utils/ftl_md.o 00:04:26.411 CC lib/ftl/utils/ftl_mempool.o 00:04:26.411 CC lib/ftl/utils/ftl_bitmap.o 00:04:26.411 SYMLINK libspdk_iscsi.so 00:04:26.411 CC lib/ftl/utils/ftl_property.o 00:04:26.411 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:26.411 LIB libspdk_vhost.a 00:04:26.670 SO libspdk_vhost.so.8.0 00:04:26.670 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:26.670 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:26.670 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:26.670 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:26.670 SYMLINK libspdk_vhost.so 00:04:26.670 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:26.670 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:26.670 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:26.670 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:26.670 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:26.928 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:26.928 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:26.928 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:26.928 CC lib/ftl/base/ftl_base_dev.o 00:04:26.928 CC lib/ftl/base/ftl_base_bdev.o 00:04:26.928 CC lib/ftl/ftl_trace.o 00:04:27.188 LIB libspdk_ftl.a 00:04:27.448 SO libspdk_ftl.so.9.0 00:04:27.707 SYMLINK libspdk_ftl.so 00:04:28.647 LIB libspdk_nvmf.a 00:04:28.647 SO libspdk_nvmf.so.19.0 00:04:28.906 SYMLINK libspdk_nvmf.so 00:04:29.164 CC module/env_dpdk/env_dpdk_rpc.o 00:04:29.423 CC module/fsdev/aio/fsdev_aio.o 00:04:29.423 CC module/sock/posix/posix.o 00:04:29.423 CC module/accel/ioat/accel_ioat.o 00:04:29.423 CC module/accel/dsa/accel_dsa.o 00:04:29.423 CC module/blob/bdev/blob_bdev.o 00:04:29.423 CC module/keyring/file/keyring.o 00:04:29.423 CC module/accel/error/accel_error.o 00:04:29.423 CC module/keyring/linux/keyring.o 00:04:29.423 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:29.423 LIB libspdk_env_dpdk_rpc.a 00:04:29.423 SO libspdk_env_dpdk_rpc.so.6.0 00:04:29.423 SYMLINK 
libspdk_env_dpdk_rpc.so 00:04:29.423 CC module/keyring/file/keyring_rpc.o 00:04:29.423 CC module/accel/dsa/accel_dsa_rpc.o 00:04:29.423 CC module/keyring/linux/keyring_rpc.o 00:04:29.423 CC module/accel/ioat/accel_ioat_rpc.o 00:04:29.423 CC module/accel/error/accel_error_rpc.o 00:04:29.423 LIB libspdk_scheduler_dynamic.a 00:04:29.423 SO libspdk_scheduler_dynamic.so.4.0 00:04:29.681 LIB libspdk_keyring_file.a 00:04:29.681 LIB libspdk_keyring_linux.a 00:04:29.681 SYMLINK libspdk_scheduler_dynamic.so 00:04:29.681 LIB libspdk_blob_bdev.a 00:04:29.681 LIB libspdk_accel_dsa.a 00:04:29.681 SO libspdk_keyring_file.so.2.0 00:04:29.681 SO libspdk_keyring_linux.so.1.0 00:04:29.681 SO libspdk_blob_bdev.so.11.0 00:04:29.681 LIB libspdk_accel_ioat.a 00:04:29.681 SO libspdk_accel_dsa.so.5.0 00:04:29.681 LIB libspdk_accel_error.a 00:04:29.681 SO libspdk_accel_ioat.so.6.0 00:04:29.681 SO libspdk_accel_error.so.2.0 00:04:29.681 SYMLINK libspdk_keyring_file.so 00:04:29.681 SYMLINK libspdk_keyring_linux.so 00:04:29.681 SYMLINK libspdk_blob_bdev.so 00:04:29.681 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:29.681 SYMLINK libspdk_accel_dsa.so 00:04:29.681 CC module/fsdev/aio/linux_aio_mgr.o 00:04:29.681 SYMLINK libspdk_accel_error.so 00:04:29.681 SYMLINK libspdk_accel_ioat.so 00:04:29.681 CC module/accel/iaa/accel_iaa.o 00:04:29.681 CC module/accel/iaa/accel_iaa_rpc.o 00:04:29.681 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:29.939 CC module/scheduler/gscheduler/gscheduler.o 00:04:29.940 LIB libspdk_scheduler_dpdk_governor.a 00:04:29.940 LIB libspdk_accel_iaa.a 00:04:29.940 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:29.940 CC module/bdev/delay/vbdev_delay.o 00:04:29.940 CC module/blobfs/bdev/blobfs_bdev.o 00:04:29.940 SO libspdk_accel_iaa.so.3.0 00:04:29.940 LIB libspdk_scheduler_gscheduler.a 00:04:29.940 CC module/bdev/error/vbdev_error.o 00:04:29.940 SO libspdk_scheduler_gscheduler.so.4.0 00:04:29.940 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:29.940 CC 
module/bdev/error/vbdev_error_rpc.o 00:04:29.940 CC module/bdev/gpt/gpt.o 00:04:29.940 LIB libspdk_fsdev_aio.a 00:04:29.940 SYMLINK libspdk_accel_iaa.so 00:04:29.940 CC module/bdev/lvol/vbdev_lvol.o 00:04:29.940 CC module/bdev/gpt/vbdev_gpt.o 00:04:29.940 SO libspdk_fsdev_aio.so.1.0 00:04:30.198 SYMLINK libspdk_scheduler_gscheduler.so 00:04:30.198 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:30.198 LIB libspdk_sock_posix.a 00:04:30.198 SO libspdk_sock_posix.so.6.0 00:04:30.198 SYMLINK libspdk_fsdev_aio.so 00:04:30.198 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:30.198 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:30.198 SYMLINK libspdk_sock_posix.so 00:04:30.198 LIB libspdk_bdev_error.a 00:04:30.198 SO libspdk_bdev_error.so.6.0 00:04:30.198 LIB libspdk_bdev_gpt.a 00:04:30.198 LIB libspdk_blobfs_bdev.a 00:04:30.457 CC module/bdev/malloc/bdev_malloc.o 00:04:30.457 CC module/bdev/null/bdev_null.o 00:04:30.457 SO libspdk_bdev_gpt.so.6.0 00:04:30.457 CC module/bdev/nvme/bdev_nvme.o 00:04:30.457 SO libspdk_blobfs_bdev.so.6.0 00:04:30.457 LIB libspdk_bdev_delay.a 00:04:30.457 SYMLINK libspdk_bdev_error.so 00:04:30.457 CC module/bdev/passthru/vbdev_passthru.o 00:04:30.457 CC module/bdev/null/bdev_null_rpc.o 00:04:30.457 SO libspdk_bdev_delay.so.6.0 00:04:30.457 SYMLINK libspdk_blobfs_bdev.so 00:04:30.457 SYMLINK libspdk_bdev_gpt.so 00:04:30.457 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:30.457 SYMLINK libspdk_bdev_delay.so 00:04:30.457 CC module/bdev/raid/bdev_raid.o 00:04:30.457 LIB libspdk_bdev_lvol.a 00:04:30.457 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:30.716 CC module/bdev/split/vbdev_split.o 00:04:30.716 SO libspdk_bdev_lvol.so.6.0 00:04:30.716 LIB libspdk_bdev_null.a 00:04:30.716 SO libspdk_bdev_null.so.6.0 00:04:30.716 SYMLINK libspdk_bdev_lvol.so 00:04:30.716 CC module/bdev/aio/bdev_aio.o 00:04:30.716 LIB libspdk_bdev_passthru.a 00:04:30.716 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:30.716 SO libspdk_bdev_passthru.so.6.0 00:04:30.716 SYMLINK 
libspdk_bdev_null.so 00:04:30.716 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:30.716 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:30.716 SYMLINK libspdk_bdev_passthru.so 00:04:30.716 CC module/bdev/split/vbdev_split_rpc.o 00:04:30.716 CC module/bdev/ftl/bdev_ftl.o 00:04:30.975 LIB libspdk_bdev_malloc.a 00:04:30.975 CC module/bdev/iscsi/bdev_iscsi.o 00:04:30.975 SO libspdk_bdev_malloc.so.6.0 00:04:30.975 LIB libspdk_bdev_split.a 00:04:30.975 SYMLINK libspdk_bdev_malloc.so 00:04:30.975 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:30.975 SO libspdk_bdev_split.so.6.0 00:04:30.975 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:30.975 LIB libspdk_bdev_zone_block.a 00:04:30.975 CC module/bdev/aio/bdev_aio_rpc.o 00:04:30.975 SYMLINK libspdk_bdev_split.so 00:04:30.975 SO libspdk_bdev_zone_block.so.6.0 00:04:30.975 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:31.234 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:31.234 SYMLINK libspdk_bdev_zone_block.so 00:04:31.234 CC module/bdev/nvme/nvme_rpc.o 00:04:31.234 CC module/bdev/nvme/bdev_mdns_client.o 00:04:31.234 LIB libspdk_bdev_aio.a 00:04:31.234 SO libspdk_bdev_aio.so.6.0 00:04:31.234 CC module/bdev/nvme/vbdev_opal.o 00:04:31.234 SYMLINK libspdk_bdev_aio.so 00:04:31.234 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:31.234 LIB libspdk_bdev_iscsi.a 00:04:31.234 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:31.234 LIB libspdk_bdev_ftl.a 00:04:31.493 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:31.493 SO libspdk_bdev_iscsi.so.6.0 00:04:31.493 SO libspdk_bdev_ftl.so.6.0 00:04:31.493 CC module/bdev/raid/bdev_raid_rpc.o 00:04:31.493 SYMLINK libspdk_bdev_iscsi.so 00:04:31.493 SYMLINK libspdk_bdev_ftl.so 00:04:31.493 CC module/bdev/raid/bdev_raid_sb.o 00:04:31.493 CC module/bdev/raid/raid0.o 00:04:31.493 CC module/bdev/raid/raid1.o 00:04:31.493 CC module/bdev/raid/concat.o 00:04:31.493 CC module/bdev/raid/raid5f.o 00:04:31.752 LIB libspdk_bdev_virtio.a 00:04:31.752 SO libspdk_bdev_virtio.so.6.0 00:04:31.752 SYMLINK 
libspdk_bdev_virtio.so 00:04:32.012 LIB libspdk_bdev_raid.a 00:04:32.272 SO libspdk_bdev_raid.so.6.0 00:04:32.272 SYMLINK libspdk_bdev_raid.so 00:04:32.842 LIB libspdk_bdev_nvme.a 00:04:32.842 SO libspdk_bdev_nvme.so.7.0 00:04:33.101 SYMLINK libspdk_bdev_nvme.so 00:04:33.670 CC module/event/subsystems/vmd/vmd.o 00:04:33.670 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:33.670 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:33.670 CC module/event/subsystems/fsdev/fsdev.o 00:04:33.670 CC module/event/subsystems/keyring/keyring.o 00:04:33.670 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:33.670 CC module/event/subsystems/iobuf/iobuf.o 00:04:33.670 CC module/event/subsystems/scheduler/scheduler.o 00:04:33.670 CC module/event/subsystems/sock/sock.o 00:04:33.670 LIB libspdk_event_keyring.a 00:04:33.670 LIB libspdk_event_fsdev.a 00:04:33.670 LIB libspdk_event_vmd.a 00:04:33.670 LIB libspdk_event_scheduler.a 00:04:33.670 LIB libspdk_event_sock.a 00:04:33.670 LIB libspdk_event_vhost_blk.a 00:04:33.670 SO libspdk_event_keyring.so.1.0 00:04:33.670 SO libspdk_event_fsdev.so.1.0 00:04:33.670 LIB libspdk_event_iobuf.a 00:04:33.670 SO libspdk_event_vmd.so.6.0 00:04:33.929 SO libspdk_event_scheduler.so.4.0 00:04:33.929 SO libspdk_event_sock.so.5.0 00:04:33.929 SO libspdk_event_vhost_blk.so.3.0 00:04:33.929 SO libspdk_event_iobuf.so.3.0 00:04:33.929 SYMLINK libspdk_event_keyring.so 00:04:33.929 SYMLINK libspdk_event_fsdev.so 00:04:33.929 SYMLINK libspdk_event_scheduler.so 00:04:33.929 SYMLINK libspdk_event_vmd.so 00:04:33.929 SYMLINK libspdk_event_sock.so 00:04:33.929 SYMLINK libspdk_event_vhost_blk.so 00:04:33.929 SYMLINK libspdk_event_iobuf.so 00:04:34.189 CC module/event/subsystems/accel/accel.o 00:04:34.449 LIB libspdk_event_accel.a 00:04:34.449 SO libspdk_event_accel.so.6.0 00:04:34.449 SYMLINK libspdk_event_accel.so 00:04:35.018 CC module/event/subsystems/bdev/bdev.o 00:04:35.018 LIB libspdk_event_bdev.a 00:04:35.018 SO libspdk_event_bdev.so.6.0 00:04:35.278 
SYMLINK libspdk_event_bdev.so 00:04:35.537 CC module/event/subsystems/scsi/scsi.o 00:04:35.537 CC module/event/subsystems/ublk/ublk.o 00:04:35.537 CC module/event/subsystems/nbd/nbd.o 00:04:35.537 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:35.537 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:35.537 LIB libspdk_event_ublk.a 00:04:35.537 LIB libspdk_event_nbd.a 00:04:35.537 LIB libspdk_event_scsi.a 00:04:35.796 SO libspdk_event_scsi.so.6.0 00:04:35.796 SO libspdk_event_ublk.so.3.0 00:04:35.796 SO libspdk_event_nbd.so.6.0 00:04:35.796 SYMLINK libspdk_event_scsi.so 00:04:35.796 SYMLINK libspdk_event_ublk.so 00:04:35.796 SYMLINK libspdk_event_nbd.so 00:04:35.796 LIB libspdk_event_nvmf.a 00:04:35.796 SO libspdk_event_nvmf.so.6.0 00:04:35.796 SYMLINK libspdk_event_nvmf.so 00:04:36.055 CC module/event/subsystems/iscsi/iscsi.o 00:04:36.055 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:36.314 LIB libspdk_event_iscsi.a 00:04:36.314 LIB libspdk_event_vhost_scsi.a 00:04:36.314 SO libspdk_event_vhost_scsi.so.3.0 00:04:36.314 SO libspdk_event_iscsi.so.6.0 00:04:36.314 SYMLINK libspdk_event_vhost_scsi.so 00:04:36.314 SYMLINK libspdk_event_iscsi.so 00:04:36.573 SO libspdk.so.6.0 00:04:36.573 SYMLINK libspdk.so 00:04:36.832 CC app/trace_record/trace_record.o 00:04:36.832 CC app/spdk_lspci/spdk_lspci.o 00:04:36.832 CXX app/trace/trace.o 00:04:36.832 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:36.832 CC app/iscsi_tgt/iscsi_tgt.o 00:04:36.832 CC app/nvmf_tgt/nvmf_main.o 00:04:36.832 CC examples/util/zipf/zipf.o 00:04:36.832 CC examples/ioat/perf/perf.o 00:04:36.832 CC test/thread/poller_perf/poller_perf.o 00:04:36.832 CC app/spdk_tgt/spdk_tgt.o 00:04:37.091 LINK spdk_lspci 00:04:37.091 LINK zipf 00:04:37.091 LINK nvmf_tgt 00:04:37.091 LINK interrupt_tgt 00:04:37.091 LINK poller_perf 00:04:37.091 LINK iscsi_tgt 00:04:37.091 LINK spdk_trace_record 00:04:37.091 LINK spdk_tgt 00:04:37.350 LINK ioat_perf 00:04:37.350 LINK spdk_trace 00:04:37.350 CC 
app/spdk_nvme_identify/identify.o 00:04:37.350 CC app/spdk_nvme_perf/perf.o 00:04:37.350 CC app/spdk_nvme_discover/discovery_aer.o 00:04:37.350 CC examples/ioat/verify/verify.o 00:04:37.350 CC app/spdk_top/spdk_top.o 00:04:37.350 CC test/dma/test_dma/test_dma.o 00:04:37.350 CC examples/thread/thread/thread_ex.o 00:04:37.350 CC app/spdk_dd/spdk_dd.o 00:04:37.610 CC examples/sock/hello_world/hello_sock.o 00:04:37.610 LINK spdk_nvme_discover 00:04:37.610 LINK verify 00:04:37.610 CC app/fio/nvme/fio_plugin.o 00:04:37.869 LINK thread 00:04:37.869 LINK hello_sock 00:04:37.869 LINK spdk_dd 00:04:37.869 CC examples/vmd/lsvmd/lsvmd.o 00:04:37.869 CC test/app/bdev_svc/bdev_svc.o 00:04:37.869 LINK test_dma 00:04:38.128 LINK lsvmd 00:04:38.128 CC app/fio/bdev/fio_plugin.o 00:04:38.128 LINK bdev_svc 00:04:38.128 CC examples/idxd/perf/perf.o 00:04:38.388 LINK spdk_nvme 00:04:38.388 LINK spdk_nvme_perf 00:04:38.388 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:38.388 LINK spdk_nvme_identify 00:04:38.388 CC examples/vmd/led/led.o 00:04:38.388 CC examples/accel/perf/accel_perf.o 00:04:38.388 CC test/app/histogram_perf/histogram_perf.o 00:04:38.388 LINK spdk_top 00:04:38.388 LINK led 00:04:38.388 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:38.388 LINK idxd_perf 00:04:38.647 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:38.647 LINK hello_fsdev 00:04:38.647 LINK histogram_perf 00:04:38.647 CC app/vhost/vhost.o 00:04:38.647 LINK spdk_bdev 00:04:38.647 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:38.647 TEST_HEADER include/spdk/accel.h 00:04:38.647 TEST_HEADER include/spdk/accel_module.h 00:04:38.647 TEST_HEADER include/spdk/assert.h 00:04:38.647 TEST_HEADER include/spdk/barrier.h 00:04:38.647 TEST_HEADER include/spdk/base64.h 00:04:38.647 TEST_HEADER include/spdk/bdev.h 00:04:38.647 TEST_HEADER include/spdk/bdev_module.h 00:04:38.647 TEST_HEADER include/spdk/bdev_zone.h 00:04:38.647 TEST_HEADER include/spdk/bit_array.h 00:04:38.647 TEST_HEADER include/spdk/bit_pool.h 
00:04:38.647 TEST_HEADER include/spdk/blob_bdev.h 00:04:38.647 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:38.647 TEST_HEADER include/spdk/blobfs.h 00:04:38.647 TEST_HEADER include/spdk/blob.h 00:04:38.647 TEST_HEADER include/spdk/conf.h 00:04:38.908 TEST_HEADER include/spdk/config.h 00:04:38.908 TEST_HEADER include/spdk/cpuset.h 00:04:38.908 TEST_HEADER include/spdk/crc16.h 00:04:38.908 TEST_HEADER include/spdk/crc32.h 00:04:38.908 TEST_HEADER include/spdk/crc64.h 00:04:38.908 TEST_HEADER include/spdk/dif.h 00:04:38.908 TEST_HEADER include/spdk/dma.h 00:04:38.908 TEST_HEADER include/spdk/endian.h 00:04:38.908 TEST_HEADER include/spdk/env_dpdk.h 00:04:38.908 TEST_HEADER include/spdk/env.h 00:04:38.908 TEST_HEADER include/spdk/event.h 00:04:38.908 LINK vhost 00:04:38.908 TEST_HEADER include/spdk/fd_group.h 00:04:38.908 TEST_HEADER include/spdk/fd.h 00:04:38.908 TEST_HEADER include/spdk/file.h 00:04:38.908 TEST_HEADER include/spdk/fsdev.h 00:04:38.908 TEST_HEADER include/spdk/fsdev_module.h 00:04:38.908 TEST_HEADER include/spdk/ftl.h 00:04:38.908 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:38.908 TEST_HEADER include/spdk/gpt_spec.h 00:04:38.908 TEST_HEADER include/spdk/hexlify.h 00:04:38.908 TEST_HEADER include/spdk/histogram_data.h 00:04:38.908 TEST_HEADER include/spdk/idxd.h 00:04:38.908 TEST_HEADER include/spdk/idxd_spec.h 00:04:38.908 CC examples/blob/hello_world/hello_blob.o 00:04:38.908 TEST_HEADER include/spdk/init.h 00:04:38.908 CC test/app/jsoncat/jsoncat.o 00:04:38.908 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:38.908 TEST_HEADER include/spdk/ioat.h 00:04:38.908 TEST_HEADER include/spdk/ioat_spec.h 00:04:38.908 TEST_HEADER include/spdk/iscsi_spec.h 00:04:38.908 TEST_HEADER include/spdk/json.h 00:04:38.908 TEST_HEADER include/spdk/jsonrpc.h 00:04:38.908 TEST_HEADER include/spdk/keyring.h 00:04:38.908 TEST_HEADER include/spdk/keyring_module.h 00:04:38.908 TEST_HEADER include/spdk/likely.h 00:04:38.908 TEST_HEADER include/spdk/log.h 00:04:38.908 CC 
examples/blob/cli/blobcli.o 00:04:38.908 TEST_HEADER include/spdk/lvol.h 00:04:38.908 TEST_HEADER include/spdk/md5.h 00:04:38.908 TEST_HEADER include/spdk/memory.h 00:04:38.908 TEST_HEADER include/spdk/mmio.h 00:04:38.908 CC test/app/stub/stub.o 00:04:38.908 TEST_HEADER include/spdk/nbd.h 00:04:38.908 TEST_HEADER include/spdk/net.h 00:04:38.908 TEST_HEADER include/spdk/notify.h 00:04:38.908 TEST_HEADER include/spdk/nvme.h 00:04:38.908 TEST_HEADER include/spdk/nvme_intel.h 00:04:38.908 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:38.908 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:38.908 TEST_HEADER include/spdk/nvme_spec.h 00:04:38.908 TEST_HEADER include/spdk/nvme_zns.h 00:04:38.908 LINK accel_perf 00:04:38.908 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:38.908 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:38.908 TEST_HEADER include/spdk/nvmf.h 00:04:38.908 LINK nvme_fuzz 00:04:38.908 TEST_HEADER include/spdk/nvmf_spec.h 00:04:38.908 TEST_HEADER include/spdk/nvmf_transport.h 00:04:38.908 TEST_HEADER include/spdk/opal.h 00:04:38.908 TEST_HEADER include/spdk/opal_spec.h 00:04:38.908 TEST_HEADER include/spdk/pci_ids.h 00:04:38.908 TEST_HEADER include/spdk/pipe.h 00:04:38.908 TEST_HEADER include/spdk/queue.h 00:04:38.908 TEST_HEADER include/spdk/reduce.h 00:04:38.908 TEST_HEADER include/spdk/rpc.h 00:04:38.908 TEST_HEADER include/spdk/scheduler.h 00:04:38.908 TEST_HEADER include/spdk/scsi.h 00:04:38.908 TEST_HEADER include/spdk/scsi_spec.h 00:04:38.908 TEST_HEADER include/spdk/sock.h 00:04:38.908 TEST_HEADER include/spdk/stdinc.h 00:04:38.908 TEST_HEADER include/spdk/string.h 00:04:38.908 TEST_HEADER include/spdk/thread.h 00:04:38.908 TEST_HEADER include/spdk/trace.h 00:04:38.908 TEST_HEADER include/spdk/trace_parser.h 00:04:38.908 TEST_HEADER include/spdk/tree.h 00:04:38.908 TEST_HEADER include/spdk/ublk.h 00:04:38.908 TEST_HEADER include/spdk/util.h 00:04:38.908 LINK jsoncat 00:04:38.908 TEST_HEADER include/spdk/uuid.h 00:04:38.908 TEST_HEADER 
include/spdk/version.h 00:04:38.908 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:38.908 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:38.908 TEST_HEADER include/spdk/vhost.h 00:04:38.908 TEST_HEADER include/spdk/vmd.h 00:04:38.908 TEST_HEADER include/spdk/xor.h 00:04:38.908 TEST_HEADER include/spdk/zipf.h 00:04:38.908 CXX test/cpp_headers/accel.o 00:04:39.167 LINK hello_blob 00:04:39.167 LINK stub 00:04:39.167 CXX test/cpp_headers/accel_module.o 00:04:39.167 CXX test/cpp_headers/assert.o 00:04:39.167 CC test/env/vtophys/vtophys.o 00:04:39.167 CXX test/cpp_headers/barrier.o 00:04:39.167 CC test/env/mem_callbacks/mem_callbacks.o 00:04:39.426 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:39.426 LINK vhost_fuzz 00:04:39.426 CC test/rpc_client/rpc_client_test.o 00:04:39.426 LINK vtophys 00:04:39.426 LINK blobcli 00:04:39.426 CC test/event/event_perf/event_perf.o 00:04:39.426 CXX test/cpp_headers/base64.o 00:04:39.426 CC test/nvme/aer/aer.o 00:04:39.426 LINK env_dpdk_post_init 00:04:39.426 LINK mem_callbacks 00:04:39.686 CC test/nvme/reset/reset.o 00:04:39.686 CXX test/cpp_headers/bdev.o 00:04:39.686 LINK event_perf 00:04:39.686 LINK rpc_client_test 00:04:39.686 CXX test/cpp_headers/bdev_module.o 00:04:39.686 CC test/nvme/sgl/sgl.o 00:04:39.686 CC test/env/memory/memory_ut.o 00:04:39.686 CXX test/cpp_headers/bdev_zone.o 00:04:39.686 LINK aer 00:04:39.686 CC examples/nvme/hello_world/hello_world.o 00:04:39.945 LINK reset 00:04:39.945 CC test/event/reactor/reactor.o 00:04:39.945 CC test/event/reactor_perf/reactor_perf.o 00:04:39.945 CC test/env/pci/pci_ut.o 00:04:39.945 CXX test/cpp_headers/bit_array.o 00:04:39.945 LINK sgl 00:04:39.945 LINK reactor 00:04:39.945 LINK reactor_perf 00:04:39.945 CC examples/nvme/reconnect/reconnect.o 00:04:39.945 LINK hello_world 00:04:40.205 CXX test/cpp_headers/bit_pool.o 00:04:40.205 CC test/event/app_repeat/app_repeat.o 00:04:40.205 CC test/nvme/e2edp/nvme_dp.o 00:04:40.205 CXX test/cpp_headers/blob_bdev.o 00:04:40.205 CC 
test/nvme/overhead/overhead.o 00:04:40.205 LINK app_repeat 00:04:40.205 LINK pci_ut 00:04:40.464 CC test/accel/dif/dif.o 00:04:40.464 CC examples/bdev/hello_world/hello_bdev.o 00:04:40.464 LINK reconnect 00:04:40.464 CXX test/cpp_headers/blobfs_bdev.o 00:04:40.464 LINK nvme_dp 00:04:40.464 LINK iscsi_fuzz 00:04:40.464 LINK memory_ut 00:04:40.464 LINK overhead 00:04:40.464 CC test/event/scheduler/scheduler.o 00:04:40.723 CXX test/cpp_headers/blobfs.o 00:04:40.723 LINK hello_bdev 00:04:40.723 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:40.723 CC examples/nvme/arbitration/arbitration.o 00:04:40.723 CC test/nvme/err_injection/err_injection.o 00:04:40.723 CXX test/cpp_headers/blob.o 00:04:40.723 LINK scheduler 00:04:40.723 CC examples/nvme/hotplug/hotplug.o 00:04:40.985 LINK err_injection 00:04:40.985 CXX test/cpp_headers/conf.o 00:04:40.985 CC examples/bdev/bdevperf/bdevperf.o 00:04:40.985 CC test/blobfs/mkfs/mkfs.o 00:04:40.985 CC test/lvol/esnap/esnap.o 00:04:40.985 LINK arbitration 00:04:40.985 CXX test/cpp_headers/config.o 00:04:40.985 CXX test/cpp_headers/cpuset.o 00:04:40.985 CXX test/cpp_headers/crc16.o 00:04:40.985 LINK hotplug 00:04:41.257 LINK mkfs 00:04:41.257 LINK dif 00:04:41.257 CC test/nvme/startup/startup.o 00:04:41.257 LINK nvme_manage 00:04:41.257 CXX test/cpp_headers/crc32.o 00:04:41.257 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:41.257 CC test/nvme/reserve/reserve.o 00:04:41.257 CC test/nvme/simple_copy/simple_copy.o 00:04:41.257 LINK startup 00:04:41.531 CC test/nvme/connect_stress/connect_stress.o 00:04:41.531 CXX test/cpp_headers/crc64.o 00:04:41.531 CC test/nvme/boot_partition/boot_partition.o 00:04:41.531 LINK cmb_copy 00:04:41.531 CC test/nvme/compliance/nvme_compliance.o 00:04:41.531 CXX test/cpp_headers/dif.o 00:04:41.531 LINK reserve 00:04:41.531 LINK simple_copy 00:04:41.531 LINK boot_partition 00:04:41.531 LINK connect_stress 00:04:41.531 CXX test/cpp_headers/dma.o 00:04:41.790 CC test/nvme/fused_ordering/fused_ordering.o 
00:04:41.790 CC examples/nvme/abort/abort.o 00:04:41.790 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:41.790 CXX test/cpp_headers/endian.o 00:04:41.790 CC test/nvme/fdp/fdp.o 00:04:41.790 CC test/nvme/cuse/cuse.o 00:04:41.790 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:41.790 LINK bdevperf 00:04:41.790 LINK nvme_compliance 00:04:41.790 LINK fused_ordering 00:04:42.049 CXX test/cpp_headers/env_dpdk.o 00:04:42.049 CXX test/cpp_headers/env.o 00:04:42.049 LINK doorbell_aers 00:04:42.049 CXX test/cpp_headers/event.o 00:04:42.049 LINK pmr_persistence 00:04:42.049 CXX test/cpp_headers/fd_group.o 00:04:42.049 CXX test/cpp_headers/fd.o 00:04:42.049 LINK abort 00:04:42.308 LINK fdp 00:04:42.308 CXX test/cpp_headers/file.o 00:04:42.308 CXX test/cpp_headers/fsdev.o 00:04:42.308 CXX test/cpp_headers/fsdev_module.o 00:04:42.308 CXX test/cpp_headers/ftl.o 00:04:42.308 CXX test/cpp_headers/fuse_dispatcher.o 00:04:42.308 CXX test/cpp_headers/gpt_spec.o 00:04:42.308 CXX test/cpp_headers/hexlify.o 00:04:42.308 CC test/bdev/bdevio/bdevio.o 00:04:42.308 CXX test/cpp_headers/histogram_data.o 00:04:42.568 CXX test/cpp_headers/idxd.o 00:04:42.568 CXX test/cpp_headers/idxd_spec.o 00:04:42.568 CXX test/cpp_headers/init.o 00:04:42.568 CC examples/nvmf/nvmf/nvmf.o 00:04:42.568 CXX test/cpp_headers/ioat.o 00:04:42.568 CXX test/cpp_headers/ioat_spec.o 00:04:42.568 CXX test/cpp_headers/iscsi_spec.o 00:04:42.568 CXX test/cpp_headers/json.o 00:04:42.568 CXX test/cpp_headers/jsonrpc.o 00:04:42.568 CXX test/cpp_headers/keyring.o 00:04:42.826 CXX test/cpp_headers/keyring_module.o 00:04:42.826 CXX test/cpp_headers/likely.o 00:04:42.826 CXX test/cpp_headers/log.o 00:04:42.826 CXX test/cpp_headers/lvol.o 00:04:42.826 CXX test/cpp_headers/md5.o 00:04:42.827 CXX test/cpp_headers/memory.o 00:04:42.827 LINK nvmf 00:04:42.827 LINK bdevio 00:04:42.827 CXX test/cpp_headers/mmio.o 00:04:42.827 CXX test/cpp_headers/nbd.o 00:04:42.827 CXX test/cpp_headers/net.o 00:04:42.827 CXX 
test/cpp_headers/notify.o 00:04:42.827 CXX test/cpp_headers/nvme.o 00:04:43.086 CXX test/cpp_headers/nvme_intel.o 00:04:43.086 CXX test/cpp_headers/nvme_ocssd.o 00:04:43.086 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:43.086 CXX test/cpp_headers/nvme_spec.o 00:04:43.086 CXX test/cpp_headers/nvme_zns.o 00:04:43.086 CXX test/cpp_headers/nvmf_cmd.o 00:04:43.086 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:43.086 CXX test/cpp_headers/nvmf.o 00:04:43.086 CXX test/cpp_headers/nvmf_spec.o 00:04:43.086 CXX test/cpp_headers/nvmf_transport.o 00:04:43.086 CXX test/cpp_headers/opal.o 00:04:43.086 CXX test/cpp_headers/opal_spec.o 00:04:43.345 CXX test/cpp_headers/pci_ids.o 00:04:43.345 LINK cuse 00:04:43.345 CXX test/cpp_headers/pipe.o 00:04:43.345 CXX test/cpp_headers/queue.o 00:04:43.345 CXX test/cpp_headers/reduce.o 00:04:43.345 CXX test/cpp_headers/rpc.o 00:04:43.345 CXX test/cpp_headers/scheduler.o 00:04:43.345 CXX test/cpp_headers/scsi.o 00:04:43.345 CXX test/cpp_headers/scsi_spec.o 00:04:43.345 CXX test/cpp_headers/sock.o 00:04:43.345 CXX test/cpp_headers/stdinc.o 00:04:43.345 CXX test/cpp_headers/string.o 00:04:43.345 CXX test/cpp_headers/thread.o 00:04:43.345 CXX test/cpp_headers/trace.o 00:04:43.345 CXX test/cpp_headers/trace_parser.o 00:04:43.345 CXX test/cpp_headers/tree.o 00:04:43.604 CXX test/cpp_headers/ublk.o 00:04:43.604 CXX test/cpp_headers/util.o 00:04:43.604 CXX test/cpp_headers/uuid.o 00:04:43.604 CXX test/cpp_headers/version.o 00:04:43.604 CXX test/cpp_headers/vfio_user_pci.o 00:04:43.604 CXX test/cpp_headers/vfio_user_spec.o 00:04:43.604 CXX test/cpp_headers/vhost.o 00:04:43.604 CXX test/cpp_headers/vmd.o 00:04:43.604 CXX test/cpp_headers/xor.o 00:04:43.604 CXX test/cpp_headers/zipf.o 00:04:46.897 LINK esnap 00:04:46.897 00:04:46.897 real 1m21.218s 00:04:46.897 user 5m58.218s 00:04:46.897 sys 1m5.794s 00:04:46.897 01:05:59 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:46.897 01:05:59 make -- common/autotest_common.sh@10 -- $ set +x 
00:04:46.897 ************************************
00:04:46.897 END TEST make
00:04:46.897 ************************************
00:04:46.897 01:05:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:04:46.897 01:05:59 -- pm/common@29 -- $ signal_monitor_resources TERM
00:04:46.897 01:05:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:04:46.897 01:05:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:46.897 01:05:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:04:46.897 01:05:59 -- pm/common@44 -- $ pid=6197
00:04:46.897 01:05:59 -- pm/common@50 -- $ kill -TERM 6197
00:04:46.897 01:05:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:46.897 01:05:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:04:46.897 01:05:59 -- pm/common@44 -- $ pid=6199
00:04:46.897 01:05:59 -- pm/common@50 -- $ kill -TERM 6199
00:04:47.157 01:05:59 -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:47.157 01:05:59 -- common/autotest_common.sh@1691 -- # lcov --version
00:04:47.157 01:05:59 -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:47.157 01:05:59 -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:47.157 01:05:59 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:47.157 01:05:59 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:47.157 01:05:59 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:47.157 01:05:59 -- scripts/common.sh@336 -- # IFS=.-:
00:04:47.157 01:05:59 -- scripts/common.sh@336 -- # read -ra ver1
00:04:47.157 01:05:59 -- scripts/common.sh@337 -- # IFS=.-:
00:04:47.157 01:05:59 -- scripts/common.sh@337 -- # read -ra ver2
00:04:47.157 01:05:59 -- scripts/common.sh@338 -- # local 'op=<'
00:04:47.157 01:05:59 -- scripts/common.sh@340 -- # ver1_l=2
00:04:47.157 01:05:59 -- scripts/common.sh@341 -- # ver2_l=1
00:04:47.157 01:05:59 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:47.157 01:05:59 -- scripts/common.sh@344 -- # case "$op" in
00:04:47.157 01:05:59 -- scripts/common.sh@345 -- # : 1
00:04:47.157 01:05:59 -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:47.157 01:05:59 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:47.157 01:05:59 -- scripts/common.sh@365 -- # decimal 1
00:04:47.157 01:05:59 -- scripts/common.sh@353 -- # local d=1
00:04:47.157 01:05:59 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:47.157 01:05:59 -- scripts/common.sh@355 -- # echo 1
00:04:47.157 01:05:59 -- scripts/common.sh@365 -- # ver1[v]=1
00:04:47.157 01:05:59 -- scripts/common.sh@366 -- # decimal 2
00:04:47.157 01:05:59 -- scripts/common.sh@353 -- # local d=2
00:04:47.157 01:05:59 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:47.157 01:05:59 -- scripts/common.sh@355 -- # echo 2
00:04:47.157 01:05:59 -- scripts/common.sh@366 -- # ver2[v]=2
00:04:47.157 01:05:59 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:47.157 01:05:59 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:47.157 01:05:59 -- scripts/common.sh@368 -- # return 0
00:04:47.157 01:05:59 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:47.157 01:05:59 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:47.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.157 --rc genhtml_branch_coverage=1
00:04:47.157 --rc genhtml_function_coverage=1
00:04:47.157 --rc genhtml_legend=1
00:04:47.157 --rc geninfo_all_blocks=1
00:04:47.157 --rc geninfo_unexecuted_blocks=1
00:04:47.157
00:04:47.157 '
00:04:47.157 01:05:59 -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:47.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.157 --rc genhtml_branch_coverage=1
00:04:47.157 --rc genhtml_function_coverage=1
00:04:47.157 --rc genhtml_legend=1
00:04:47.157 --rc geninfo_all_blocks=1
00:04:47.157 --rc geninfo_unexecuted_blocks=1
00:04:47.157
00:04:47.157 '
00:04:47.157 01:05:59 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:47.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.157 --rc genhtml_branch_coverage=1
00:04:47.157 --rc genhtml_function_coverage=1
00:04:47.157 --rc genhtml_legend=1
00:04:47.157 --rc geninfo_all_blocks=1
00:04:47.157 --rc geninfo_unexecuted_blocks=1
00:04:47.157
00:04:47.157 '
00:04:47.157 01:05:59 -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:47.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.157 --rc genhtml_branch_coverage=1
00:04:47.157 --rc genhtml_function_coverage=1
00:04:47.157 --rc genhtml_legend=1
00:04:47.157 --rc geninfo_all_blocks=1
00:04:47.157 --rc geninfo_unexecuted_blocks=1
00:04:47.157
00:04:47.157 '
00:04:47.157 01:05:59 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:47.157 01:05:59 -- nvmf/common.sh@7 -- # uname -s
00:04:47.157 01:05:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:47.157 01:05:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:47.157 01:05:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:47.157 01:05:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:47.157 01:05:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:47.157 01:05:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:47.157 01:05:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:47.157 01:05:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:47.157 01:05:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:47.157 01:05:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:47.157 01:05:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43d29277-c62c-4be4-9b98-829e479f1691
00:04:47.157 01:05:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=43d29277-c62c-4be4-9b98-829e479f1691
00:04:47.157 01:05:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:47.157 01:05:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:47.157 01:05:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:47.157 01:05:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:47.157 01:05:59 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:47.157 01:05:59 -- scripts/common.sh@15 -- # shopt -s extglob
00:04:47.157 01:05:59 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:47.157 01:05:59 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:47.157 01:05:59 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:47.157 01:05:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:47.157 01:05:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:47.157 01:05:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:47.157 01:05:59 -- paths/export.sh@5 -- # export PATH
00:04:47.157 01:05:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:47.158 01:05:59 -- nvmf/common.sh@51 -- # : 0
00:04:47.158 01:05:59 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:47.158 01:05:59 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:47.158 01:05:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:47.158 01:05:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:47.158 01:05:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:47.158 01:05:59 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:47.158 01:05:59 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:47.158 01:05:59 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:47.158 01:05:59 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:47.158 01:05:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:04:47.158 01:05:59 -- spdk/autotest.sh@32 -- # uname -s
00:04:47.158 01:05:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:04:47.158 01:05:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:04:47.158 01:05:59 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:04:47.158 01:05:59 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:04:47.158 01:05:59 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
00:04:47.158 01:05:59 -- spdk/autotest.sh@44 -- # modprobe nbd
00:04:47.158 01:05:59 -- spdk/autotest.sh@46 -- # type -P udevadm
00:04:47.158 01:05:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:04:47.158 01:05:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:04:47.158 01:05:59 -- spdk/autotest.sh@48 -- # udevadm_pid=66516
00:04:47.158 01:05:59 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:04:47.158 01:05:59 -- pm/common@17 -- # local monitor
00:04:47.158 01:05:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:47.158 01:05:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:47.158 01:05:59 -- pm/common@21 -- # date +%s
00:04:47.158 01:05:59 -- pm/common@25 -- # sleep 1
00:04:47.158 01:05:59 -- pm/common@21 -- # date +%s
00:04:47.158 01:05:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728954359
00:04:47.158 01:05:59 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728954359
00:04:47.418 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728954359_collect-cpu-load.pm.log
00:04:47.418 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728954359_collect-vmstat.pm.log
00:04:48.356 01:06:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:04:48.356 01:06:00 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:04:48.356 01:06:00 -- common/autotest_common.sh@724 -- # xtrace_disable
00:04:48.356 01:06:00 -- common/autotest_common.sh@10 -- # set +x
00:04:48.356 01:06:00 -- spdk/autotest.sh@59 -- # create_test_list
00:04:48.356 01:06:00 -- common/autotest_common.sh@748 -- # xtrace_disable
00:04:48.356 01:06:00 -- common/autotest_common.sh@10 -- # set +x
00:04:48.356 01:06:00 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:04:48.356 01:06:00 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:04:48.356 01:06:00 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:04:48.356 01:06:00 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:04:48.356 01:06:00 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:04:48.356 01:06:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:04:48.356 01:06:00 -- common/autotest_common.sh@1455 -- # uname
00:04:48.356 01:06:00 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:04:48.356 01:06:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:04:48.356 01:06:00 -- common/autotest_common.sh@1475 -- # uname
00:04:48.356 01:06:00 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:04:48.356 01:06:00 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:04:48.356 01:06:00 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:04:48.356 lcov: LCOV version 1.15
00:04:48.356 01:06:01 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:05:03.249 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:05:03.249 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:05:18.133 01:06:30 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:05:18.133 01:06:30 -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:18.133 01:06:30 -- common/autotest_common.sh@10 -- # set +x
00:05:18.133 01:06:30 -- spdk/autotest.sh@78 -- # rm -f
00:05:18.133 01:06:30 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:18.701 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:18.701 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:05:18.701 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:05:18.701 01:06:31 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:05:18.701 01:06:31 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:05:18.701 01:06:31 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:05:18.701 01:06:31 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:05:18.701 01:06:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:18.701 01:06:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:05:18.701 01:06:31 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:05:18.701 01:06:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:18.701 01:06:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:18.701 01:06:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:18.701 01:06:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1
00:05:18.701 01:06:31 -- common/autotest_common.sh@1648 -- # local device=nvme1n1
00:05:18.701 01:06:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:05:18.701 01:06:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:18.701 01:06:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:18.701 01:06:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2
00:05:18.701 01:06:31 -- common/autotest_common.sh@1648 -- # local device=nvme1n2
00:05:18.701 01:06:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]]
00:05:18.701 01:06:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:18.701 01:06:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:05:18.701 01:06:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3
00:05:18.701 01:06:31 -- common/autotest_common.sh@1648 -- # local device=nvme1n3
00:05:18.701 01:06:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]]
00:05:18.701 01:06:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:05:18.701 01:06:31 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:05:18.701 01:06:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:18.701 01:06:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:18.701 01:06:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:05:18.701 01:06:31 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:05:18.701 01:06:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:05:18.701 No valid GPT data, bailing
00:05:18.701 01:06:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:18.701 01:06:31 -- scripts/common.sh@394 -- # pt=
00:05:18.701 01:06:31 -- scripts/common.sh@395 -- # return 1
00:05:18.701 01:06:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:05:18.701 1+0 records in
00:05:18.701 1+0 records out
00:05:18.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00622526 s, 168 MB/s
00:05:18.701 01:06:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:18.701 01:06:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:18.701 01:06:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:05:18.701 01:06:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:05:18.701 01:06:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:05:18.960 No valid GPT data, bailing
00:05:18.960 01:06:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:05:18.960 01:06:31 -- scripts/common.sh@394 -- # pt=
00:05:18.960 01:06:31 -- scripts/common.sh@395 -- # return 1
00:05:18.960 01:06:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:05:18.960 1+0 records in
00:05:18.960 1+0 records out
00:05:18.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00606452 s, 173 MB/s
00:05:18.960 01:06:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:18.960 01:06:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:18.960 01:06:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2
00:05:18.960 01:06:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt
00:05:18.960 01:06:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2
00:05:18.960 No valid GPT data, bailing
00:05:18.960 01:06:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:05:18.960 01:06:31 -- scripts/common.sh@394 -- # pt=
00:05:18.960 01:06:31 -- scripts/common.sh@395 -- # return 1
00:05:18.960 01:06:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1
00:05:18.960 1+0 records in
00:05:18.960 1+0 records out
00:05:18.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00666017 s, 157 MB/s
00:05:18.960 01:06:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:18.960 01:06:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:18.960 01:06:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3
00:05:18.960 01:06:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt
00:05:18.960 01:06:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3
00:05:18.960 No valid GPT data, bailing
00:05:18.960 01:06:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:05:18.960 01:06:31 -- scripts/common.sh@394 -- # pt=
00:05:18.960 01:06:31 -- scripts/common.sh@395 -- # return 1
00:05:18.960 01:06:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:05:18.960 1+0 records in
00:05:18.960 1+0 records out
00:05:18.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00672983 s, 156 MB/s
00:05:19.218 01:06:31 -- spdk/autotest.sh@105 -- # sync
00:05:19.218 01:06:31 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:19.218 01:06:31 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:19.218 01:06:31 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:21.755 01:06:34 -- spdk/autotest.sh@111 -- # uname -s
00:05:21.755 01:06:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:05:21.755 01:06:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:05:21.755 01:06:34 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:22.693 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:22.693 Hugepages
00:05:22.693 node hugesize free / total
00:05:22.693 node0 1048576kB 0 / 0
00:05:22.693 node0 2048kB 0 / 0
00:05:22.693
00:05:22.693 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:22.693 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:22.693 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:22.953 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:05:22.953 01:06:35 -- spdk/autotest.sh@117 -- # uname -s
00:05:22.953 01:06:35 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:22.953 01:06:35 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:22.953 01:06:35 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:23.522 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:23.781 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:23.781 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:05:23.781 01:06:36 -- common/autotest_common.sh@1515 -- # sleep 1
00:05:25.186 01:06:37 -- common/autotest_common.sh@1516 -- # bdfs=()
00:05:25.186 01:06:37 -- common/autotest_common.sh@1516 -- # local bdfs
00:05:25.186 01:06:37 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:05:25.186 01:06:37 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:05:25.186 01:06:37 -- common/autotest_common.sh@1496 -- # bdfs=()
00:05:25.186 01:06:37 -- common/autotest_common.sh@1496 -- # local bdfs
00:05:25.186 01:06:37 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:25.186 01:06:37 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:25.186 01:06:37 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:05:25.186 01:06:37 -- common/autotest_common.sh@1498 -- # (( 2 == 0 ))
00:05:25.186 01:06:37 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:05:25.186 01:06:37 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:25.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:25.443 Waiting for block devices as requested
00:05:25.443 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:05:25.702 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:05:25.702 01:06:38 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:05:25.702 01:06:38 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:05:25.702 01:06:38 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme
00:05:25.702 01:06:38 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:05:25.702 01:06:38 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:05:25.702 01:06:38 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:05:25.702 01:06:38 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:05:25.702 01:06:38 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1
00:05:25.702 01:06:38 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1
00:05:25.702 01:06:38 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]]
00:05:25.702 01:06:38 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1
00:05:25.702 01:06:38 -- common/autotest_common.sh@1529 -- # grep oacs
00:05:25.702 01:06:38 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:05:25.702 01:06:38 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:05:25.702 01:06:38 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:05:25.702 01:06:38 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:05:25.702 01:06:38 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:05:25.702 01:06:38 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1
00:05:25.702 01:06:38 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:05:25.702 01:06:38 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:05:25.702 01:06:38 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:05:25.702 01:06:38 -- common/autotest_common.sh@1541 -- # continue
00:05:25.702 01:06:38 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:05:25.702 01:06:38 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:05:25.702 01:06:38 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme
00:05:25.702 01:06:38 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:05:25.702 01:06:38 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:05:25.702 01:06:38 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:05:25.702 01:06:38 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:05:25.702 01:06:38 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0
00:05:25.702 01:06:38 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0
00:05:25.702 01:06:38 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]]
00:05:25.703 01:06:38 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0
00:05:25.703 01:06:38 -- common/autotest_common.sh@1529 -- # grep oacs
00:05:25.703 01:06:38 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:05:25.703 01:06:38 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:05:25.703 01:06:38 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:05:25.703 01:06:38 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:05:25.703 01:06:38 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0
00:05:25.703 01:06:38 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:05:25.703 01:06:38 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:05:25.703 01:06:38 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:05:25.703 01:06:38 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:05:25.703 01:06:38 -- common/autotest_common.sh@1541 -- # continue
00:05:25.703 01:06:38 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:05:25.703 01:06:38 -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:25.703 01:06:38 -- common/autotest_common.sh@10 -- # set +x
00:05:25.962 01:06:38 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:05:25.962 01:06:38 -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:25.962 01:06:38 -- common/autotest_common.sh@10 -- # set +x
00:05:25.962 01:06:38 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:26.532 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:26.791 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:26.791 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:05:26.791 01:06:39 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:05:26.791 01:06:39 -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:26.791 01:06:39 -- common/autotest_common.sh@10 -- # set +x
00:05:26.791 01:06:39 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:05:26.791 01:06:39 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs
00:05:26.791 01:06:39 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54
00:05:26.791 01:06:39 -- common/autotest_common.sh@1561 -- # bdfs=()
00:05:26.791 01:06:39 -- common/autotest_common.sh@1561 -- # _bdfs=()
00:05:26.791 01:06:39 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs
00:05:26.791 01:06:39 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs))
00:05:26.791 01:06:39 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs
00:05:26.791 01:06:39 -- common/autotest_common.sh@1496 -- # bdfs=()
00:05:26.791 01:06:39 -- common/autotest_common.sh@1496 -- # local bdfs
00:05:26.791 01:06:39 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:26.791 01:06:39 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:26.791 01:06:39 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:05:27.052 01:06:39 -- common/autotest_common.sh@1498 -- # (( 2 == 0 ))
00:05:27.052 01:06:39 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:05:27.052 01:06:39 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:05:27.052 01:06:39 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:05:27.052 01:06:39 -- common/autotest_common.sh@1564 -- # device=0x0010
00:05:27.052 01:06:39 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:05:27.052 01:06:39 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:05:27.052 01:06:39 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:05:27.052 01:06:39 -- common/autotest_common.sh@1564 -- # device=0x0010
00:05:27.052 01:06:39 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:05:27.052 01:06:39 -- common/autotest_common.sh@1570 -- # (( 0 > 0 ))
00:05:27.052 01:06:39 -- common/autotest_common.sh@1570 -- # return 0
00:05:27.052 01:06:39 -- common/autotest_common.sh@1577 -- # [[ -z '' ]]
00:05:27.052 01:06:39 -- common/autotest_common.sh@1578 -- # return 0
00:05:27.052 01:06:39 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:05:27.052 01:06:39 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:05:27.052 01:06:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:05:27.052 01:06:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:05:27.052 01:06:39 -- spdk/autotest.sh@149 -- # timing_enter lib
00:05:27.052 01:06:39 -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:27.052 01:06:39 -- common/autotest_common.sh@10 -- # set +x
00:05:27.052 01:06:39 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:05:27.052 01:06:39 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:05:27.052 01:06:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:27.052 01:06:39 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:27.052 01:06:39 -- common/autotest_common.sh@10 -- # set +x
00:05:27.052 ************************************
00:05:27.052 START TEST env
00:05:27.052 ************************************
00:05:27.052 01:06:39 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:05:27.052 * Looking for test storage...
00:05:27.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:05:27.052 01:06:39 env -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:27.052 01:06:39 env -- common/autotest_common.sh@1691 -- # lcov --version
00:05:27.052 01:06:39 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:27.313 01:06:39 env -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:27.313 01:06:39 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:27.313 01:06:39 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:27.313 01:06:39 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:27.313 01:06:39 env -- scripts/common.sh@336 -- # IFS=.-:
00:05:27.313 01:06:39 env -- scripts/common.sh@336 -- # read -ra ver1
00:05:27.313 01:06:39 env -- scripts/common.sh@337 -- # IFS=.-:
00:05:27.313 01:06:39 env -- scripts/common.sh@337 -- # read -ra ver2
00:05:27.313 01:06:39 env -- scripts/common.sh@338 -- # local 'op=<'
00:05:27.313 01:06:39 env -- scripts/common.sh@340 -- # ver1_l=2
00:05:27.313 01:06:39 env -- scripts/common.sh@341 -- # ver2_l=1
00:05:27.313 01:06:39 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:27.313 01:06:39 env -- scripts/common.sh@344 -- # case "$op" in
00:05:27.313 01:06:39 env -- scripts/common.sh@345 -- # : 1
00:05:27.313 01:06:39 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:27.313 01:06:39 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:27.313 01:06:39 env -- scripts/common.sh@365 -- # decimal 1
00:05:27.313 01:06:39 env -- scripts/common.sh@353 -- # local d=1
00:05:27.313 01:06:39 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:27.313 01:06:39 env -- scripts/common.sh@355 -- # echo 1
00:05:27.313 01:06:39 env -- scripts/common.sh@365 -- # ver1[v]=1
00:05:27.313 01:06:39 env -- scripts/common.sh@366 -- # decimal 2
00:05:27.313 01:06:39 env -- scripts/common.sh@353 -- # local d=2
00:05:27.313 01:06:39 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:27.313 01:06:39 env -- scripts/common.sh@355 -- # echo 2
00:05:27.313 01:06:39 env -- scripts/common.sh@366 -- # ver2[v]=2
00:05:27.313 01:06:39 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:27.313 01:06:39 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:27.313 01:06:39 env -- scripts/common.sh@368 -- # return 0
00:05:27.313 01:06:39 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:27.313 01:06:39 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:27.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:27.313 --rc genhtml_branch_coverage=1
00:05:27.313 --rc genhtml_function_coverage=1
00:05:27.313 --rc genhtml_legend=1
00:05:27.313 --rc geninfo_all_blocks=1
00:05:27.313 --rc geninfo_unexecuted_blocks=1
00:05:27.313
00:05:27.313 '
00:05:27.313 01:06:39 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:27.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:27.313 --rc genhtml_branch_coverage=1
00:05:27.313 --rc genhtml_function_coverage=1
00:05:27.313 --rc genhtml_legend=1
00:05:27.313 --rc geninfo_all_blocks=1
00:05:27.313 --rc geninfo_unexecuted_blocks=1
00:05:27.313
00:05:27.313 '
00:05:27.313 01:06:39 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:27.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:27.313 --rc genhtml_branch_coverage=1 00:05:27.313 --rc genhtml_function_coverage=1 00:05:27.313 --rc genhtml_legend=1 00:05:27.313 --rc geninfo_all_blocks=1 00:05:27.313 --rc geninfo_unexecuted_blocks=1 00:05:27.313 00:05:27.313 ' 00:05:27.313 01:06:39 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:27.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.313 --rc genhtml_branch_coverage=1 00:05:27.313 --rc genhtml_function_coverage=1 00:05:27.313 --rc genhtml_legend=1 00:05:27.313 --rc geninfo_all_blocks=1 00:05:27.313 --rc geninfo_unexecuted_blocks=1 00:05:27.313 00:05:27.313 ' 00:05:27.313 01:06:39 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:27.313 01:06:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.313 01:06:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.313 01:06:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.313 ************************************ 00:05:27.313 START TEST env_memory 00:05:27.313 ************************************ 00:05:27.314 01:06:39 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:27.314 00:05:27.314 00:05:27.314 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.314 http://cunit.sourceforge.net/ 00:05:27.314 00:05:27.314 00:05:27.314 Suite: memory 00:05:27.314 Test: alloc and free memory map ...[2024-10-15 01:06:39.923372] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:27.314 passed 00:05:27.314 Test: mem map translation ...[2024-10-15 01:06:39.966628] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:27.314 [2024-10-15 01:06:39.966668] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:27.314 [2024-10-15 01:06:39.966739] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:27.314 [2024-10-15 01:06:39.966754] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:27.314 passed 00:05:27.574 Test: mem map registration ...[2024-10-15 01:06:40.036475] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:27.574 [2024-10-15 01:06:40.036529] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:27.574 passed 00:05:27.574 Test: mem map adjacent registrations ...passed 00:05:27.574 00:05:27.574 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.574 suites 1 1 n/a 0 0 00:05:27.574 tests 4 4 4 0 0 00:05:27.574 asserts 152 152 152 0 n/a 00:05:27.574 00:05:27.574 Elapsed time = 0.239 seconds 00:05:27.574 00:05:27.574 real 0m0.292s 00:05:27.574 user 0m0.252s 00:05:27.574 sys 0m0.028s 00:05:27.574 01:06:40 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.574 01:06:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:27.574 ************************************ 00:05:27.574 END TEST env_memory 00:05:27.574 ************************************ 00:05:27.574 01:06:40 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:27.574 01:06:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.574 01:06:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.574 01:06:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.574 
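The `lt 1.15 2` / `cmp_versions` xtrace above (scripts/common.sh) splits both version strings on `.`, `-` and `:` and compares them field by field. A standalone bash sketch of that routine follows; `ver_lt` is an illustrative name, not the script's real helper, and the padding of the shorter version with zeros is an assumption beyond what the trace shows.

```shell
# Sketch of the cmp_versions logic traced above: split on the
# IFS=.-: set, pad the shorter version with zeros, and let the
# first differing field decide.  ver_lt is an illustrative name.
ver_lt() {                        # ver_lt A B -> exit 0 iff A < B
    local -a ver1 ver2
    local v max a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1                      # equal versions are not less-than
}

ver_lt 1.15 2 && echo "lcov 1.15 predates 2"   # the branch this run took
```

This mirrors why the trace above ends in `return 0` for `lt 1.15 2`: the first fields already differ (1 < 2), so the remaining fields never matter.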
************************************ 00:05:27.574 START TEST env_vtophys 00:05:27.574 ************************************ 00:05:27.574 01:06:40 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:27.574 EAL: lib.eal log level changed from notice to debug 00:05:27.574 EAL: Detected lcore 0 as core 0 on socket 0 00:05:27.574 EAL: Detected lcore 1 as core 0 on socket 0 00:05:27.574 EAL: Detected lcore 2 as core 0 on socket 0 00:05:27.574 EAL: Detected lcore 3 as core 0 on socket 0 00:05:27.574 EAL: Detected lcore 4 as core 0 on socket 0 00:05:27.574 EAL: Detected lcore 5 as core 0 on socket 0 00:05:27.574 EAL: Detected lcore 6 as core 0 on socket 0 00:05:27.574 EAL: Detected lcore 7 as core 0 on socket 0 00:05:27.574 EAL: Detected lcore 8 as core 0 on socket 0 00:05:27.574 EAL: Detected lcore 9 as core 0 on socket 0 00:05:27.574 EAL: Maximum logical cores by configuration: 128 00:05:27.574 EAL: Detected CPU lcores: 10 00:05:27.574 EAL: Detected NUMA nodes: 1 00:05:27.574 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:27.574 EAL: Detected shared linkage of DPDK 00:05:27.574 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:27.575 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:27.575 EAL: Registered [vdev] bus. 
00:05:27.575 EAL: bus.vdev log level changed from disabled to notice 00:05:27.575 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:27.575 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:27.575 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:27.575 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:27.575 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:27.575 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:27.575 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:27.575 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:27.575 EAL: No shared files mode enabled, IPC will be disabled 00:05:27.575 EAL: No shared files mode enabled, IPC is disabled 00:05:27.575 EAL: Selected IOVA mode 'PA' 00:05:27.575 EAL: Probing VFIO support... 00:05:27.575 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:27.575 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:27.575 EAL: Ask a virtual area of 0x2e000 bytes 00:05:27.575 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:27.575 EAL: Setting up physically contiguous memory... 
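The VFIO probe above fails with "Module /sys/module/vfio not found": a loaded kernel module shows up as a directory under /sys/module, which is what the EAL checks before attempting VFIO device assignment. A hedged sketch of that check — the `module_root` parameter exists only to make it testable; the EAL itself looks at /sys/module directly:

```shell
# A loaded kernel module appears as a directory under /sys/module;
# the EAL's VFIO probe amounts to this existence test.  module_root
# is a hypothetical parameter added here for testability.
module_loaded() {
    local name=$1 module_root=${2:-/sys/module}
    [ -d "$module_root/$name" ]
}

if module_loaded vfio && module_loaded vfio_pci; then
    echo "VFIO available"
else
    echo "VFIO modules not loaded, skipping VFIO support"   # this run's path
fi
```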
00:05:27.575 EAL: Setting maximum number of open files to 524288 00:05:27.575 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:27.575 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:27.575 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.575 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:27.575 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.575 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.575 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:27.575 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:27.575 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.575 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:27.575 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.575 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.575 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:27.575 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:27.575 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.575 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:27.575 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.575 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.575 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:27.575 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:27.575 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.575 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:27.575 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.575 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.575 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:27.575 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:27.575 EAL: Hugepages will be freed exactly as allocated. 
00:05:27.575 EAL: No shared files mode enabled, IPC is disabled 00:05:27.575 EAL: No shared files mode enabled, IPC is disabled 00:05:27.835 EAL: TSC frequency is ~2290000 KHz 00:05:27.835 EAL: Main lcore 0 is ready (tid=7fef29c3da40;cpuset=[0]) 00:05:27.835 EAL: Trying to obtain current memory policy. 00:05:27.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.835 EAL: Restoring previous memory policy: 0 00:05:27.835 EAL: request: mp_malloc_sync 00:05:27.835 EAL: No shared files mode enabled, IPC is disabled 00:05:27.835 EAL: Heap on socket 0 was expanded by 2MB 00:05:27.835 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:27.835 EAL: No shared files mode enabled, IPC is disabled 00:05:27.835 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:27.835 EAL: Mem event callback 'spdk:(nil)' registered 00:05:27.835 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:27.835 00:05:27.835 00:05:27.835 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.835 http://cunit.sourceforge.net/ 00:05:27.835 00:05:27.835 00:05:27.835 Suite: components_suite 00:05:28.094 Test: vtophys_malloc_test ...passed 00:05:28.094 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:28.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.094 EAL: Restoring previous memory policy: 4 00:05:28.094 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.094 EAL: request: mp_malloc_sync 00:05:28.094 EAL: No shared files mode enabled, IPC is disabled 00:05:28.094 EAL: Heap on socket 0 was expanded by 4MB 00:05:28.094 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.094 EAL: request: mp_malloc_sync 00:05:28.094 EAL: No shared files mode enabled, IPC is disabled 00:05:28.094 EAL: Heap on socket 0 was shrunk by 4MB 00:05:28.094 EAL: Trying to obtain current memory policy. 
00:05:28.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.094 EAL: Restoring previous memory policy: 4 00:05:28.094 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.094 EAL: request: mp_malloc_sync 00:05:28.094 EAL: No shared files mode enabled, IPC is disabled 00:05:28.094 EAL: Heap on socket 0 was expanded by 6MB 00:05:28.094 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.094 EAL: request: mp_malloc_sync 00:05:28.094 EAL: No shared files mode enabled, IPC is disabled 00:05:28.094 EAL: Heap on socket 0 was shrunk by 6MB 00:05:28.094 EAL: Trying to obtain current memory policy. 00:05:28.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.094 EAL: Restoring previous memory policy: 4 00:05:28.094 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.094 EAL: request: mp_malloc_sync 00:05:28.094 EAL: No shared files mode enabled, IPC is disabled 00:05:28.094 EAL: Heap on socket 0 was expanded by 10MB 00:05:28.094 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.094 EAL: request: mp_malloc_sync 00:05:28.094 EAL: No shared files mode enabled, IPC is disabled 00:05:28.094 EAL: Heap on socket 0 was shrunk by 10MB 00:05:28.094 EAL: Trying to obtain current memory policy. 00:05:28.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.094 EAL: Restoring previous memory policy: 4 00:05:28.094 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.094 EAL: request: mp_malloc_sync 00:05:28.094 EAL: No shared files mode enabled, IPC is disabled 00:05:28.094 EAL: Heap on socket 0 was expanded by 18MB 00:05:28.094 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.094 EAL: request: mp_malloc_sync 00:05:28.094 EAL: No shared files mode enabled, IPC is disabled 00:05:28.094 EAL: Heap on socket 0 was shrunk by 18MB 00:05:28.094 EAL: Trying to obtain current memory policy. 
00:05:28.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.094 EAL: Restoring previous memory policy: 4 00:05:28.094 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.094 EAL: request: mp_malloc_sync 00:05:28.094 EAL: No shared files mode enabled, IPC is disabled 00:05:28.094 EAL: Heap on socket 0 was expanded by 34MB 00:05:28.094 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.094 EAL: request: mp_malloc_sync 00:05:28.094 EAL: No shared files mode enabled, IPC is disabled 00:05:28.094 EAL: Heap on socket 0 was shrunk by 34MB 00:05:28.094 EAL: Trying to obtain current memory policy. 00:05:28.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.094 EAL: Restoring previous memory policy: 4 00:05:28.094 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.094 EAL: request: mp_malloc_sync 00:05:28.094 EAL: No shared files mode enabled, IPC is disabled 00:05:28.094 EAL: Heap on socket 0 was expanded by 66MB 00:05:28.094 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.094 EAL: request: mp_malloc_sync 00:05:28.094 EAL: No shared files mode enabled, IPC is disabled 00:05:28.094 EAL: Heap on socket 0 was shrunk by 66MB 00:05:28.094 EAL: Trying to obtain current memory policy. 00:05:28.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.094 EAL: Restoring previous memory policy: 4 00:05:28.094 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.094 EAL: request: mp_malloc_sync 00:05:28.094 EAL: No shared files mode enabled, IPC is disabled 00:05:28.094 EAL: Heap on socket 0 was expanded by 130MB 00:05:28.094 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.354 EAL: request: mp_malloc_sync 00:05:28.354 EAL: No shared files mode enabled, IPC is disabled 00:05:28.354 EAL: Heap on socket 0 was shrunk by 130MB 00:05:28.354 EAL: Trying to obtain current memory policy. 
00:05:28.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.354 EAL: Restoring previous memory policy: 4 00:05:28.354 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.354 EAL: request: mp_malloc_sync 00:05:28.354 EAL: No shared files mode enabled, IPC is disabled 00:05:28.354 EAL: Heap on socket 0 was expanded by 258MB 00:05:28.354 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.354 EAL: request: mp_malloc_sync 00:05:28.354 EAL: No shared files mode enabled, IPC is disabled 00:05:28.354 EAL: Heap on socket 0 was shrunk by 258MB 00:05:28.354 EAL: Trying to obtain current memory policy. 00:05:28.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.354 EAL: Restoring previous memory policy: 4 00:05:28.354 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.354 EAL: request: mp_malloc_sync 00:05:28.354 EAL: No shared files mode enabled, IPC is disabled 00:05:28.354 EAL: Heap on socket 0 was expanded by 514MB 00:05:28.613 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.613 EAL: request: mp_malloc_sync 00:05:28.613 EAL: No shared files mode enabled, IPC is disabled 00:05:28.613 EAL: Heap on socket 0 was shrunk by 514MB 00:05:28.613 EAL: Trying to obtain current memory policy. 
00:05:28.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.873 EAL: Restoring previous memory policy: 4 00:05:28.873 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.873 EAL: request: mp_malloc_sync 00:05:28.873 EAL: No shared files mode enabled, IPC is disabled 00:05:28.873 EAL: Heap on socket 0 was expanded by 1026MB 00:05:29.133 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.133 passed 00:05:29.133 00:05:29.133 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.133 suites 1 1 n/a 0 0 00:05:29.133 tests 2 2 2 0 0 00:05:29.133 asserts 5274 5274 5274 0 n/a 00:05:29.133 00:05:29.133 Elapsed time = 1.357 seconds 00:05:29.133 EAL: request: mp_malloc_sync 00:05:29.133 EAL: No shared files mode enabled, IPC is disabled 00:05:29.133 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:29.133 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.133 EAL: request: mp_malloc_sync 00:05:29.133 EAL: No shared files mode enabled, IPC is disabled 00:05:29.133 EAL: Heap on socket 0 was shrunk by 2MB 00:05:29.133 EAL: No shared files mode enabled, IPC is disabled 00:05:29.133 EAL: No shared files mode enabled, IPC is disabled 00:05:29.133 EAL: No shared files mode enabled, IPC is disabled 00:05:29.133 00:05:29.133 real 0m1.605s 00:05:29.133 user 0m0.747s 00:05:29.134 sys 0m0.721s 00:05:29.134 01:06:41 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.134 01:06:41 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:29.134 ************************************ 00:05:29.134 END TEST env_vtophys 00:05:29.134 ************************************ 00:05:29.394 01:06:41 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:29.394 01:06:41 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.394 01:06:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.394 01:06:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.394 
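The env_vtophys run above walks the EAL heap up and down (2MB, 4MB, 6MB ... 1026MB), each "expanded by" matched by a "shrunk by". When reading such a log, a small awk pass can balance those events; the sample lines below are copied from the output above, and awk's numeric coercion reads the leading digits of fields like "4MB".

```shell
# Balance "expanded by"/"shrunk by" heap events from EAL log lines.
sum_heap_events() {
    awk '/Heap on socket 0 was expanded by/ { e += $NF }
         /Heap on socket 0 was shrunk by/   { s += $NF }
         END { printf "expanded %dMB, shrunk %dMB\n", e, s }'
}

sum_heap_events <<'EOF'
EAL: Heap on socket 0 was expanded by 4MB
EAL: Heap on socket 0 was shrunk by 4MB
EAL: Heap on socket 0 was expanded by 6MB
EAL: Heap on socket 0 was shrunk by 6MB
EOF
# prints: expanded 10MB, shrunk 10MB
```

A run where the two totals differ would indicate a leaked hugepage-backed allocation, which is exactly what the matched expand/shrink pairs above rule out.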
************************************ 00:05:29.394 START TEST env_pci 00:05:29.394 ************************************ 00:05:29.394 01:06:41 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:29.394 00:05:29.394 00:05:29.394 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.394 http://cunit.sourceforge.net/ 00:05:29.394 00:05:29.394 00:05:29.394 Suite: pci 00:05:29.394 Test: pci_hook ...[2024-10-15 01:06:41.908416] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68776 has claimed it 00:05:29.394 passed 00:05:29.394 00:05:29.394 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.394 suites 1 1 n/a 0 0 00:05:29.394 tests 1 1 1 0 0 00:05:29.394 asserts 25 25 25 0 n/a 00:05:29.394 00:05:29.394 Elapsed time = 0.009 secondsEAL: Cannot find device (10000:00:01.0) 00:05:29.394 EAL: Failed to attach device on primary process 00:05:29.394 00:05:29.394 00:05:29.394 real 0m0.092s 00:05:29.394 user 0m0.040s 00:05:29.394 sys 0m0.050s 00:05:29.394 01:06:41 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.394 01:06:41 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:29.394 ************************************ 00:05:29.394 END TEST env_pci 00:05:29.394 ************************************ 00:05:29.394 01:06:42 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:29.394 01:06:42 env -- env/env.sh@15 -- # uname 00:05:29.394 01:06:42 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:29.394 01:06:42 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:29.394 01:06:42 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.394 01:06:42 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:29.394 01:06:42 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.394 01:06:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.394 ************************************ 00:05:29.394 START TEST env_dpdk_post_init 00:05:29.394 ************************************ 00:05:29.394 01:06:42 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.394 EAL: Detected CPU lcores: 10 00:05:29.394 EAL: Detected NUMA nodes: 1 00:05:29.394 EAL: Detected shared linkage of DPDK 00:05:29.394 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.394 EAL: Selected IOVA mode 'PA' 00:05:29.654 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.654 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:29.654 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:29.654 Starting DPDK initialization... 00:05:29.654 Starting SPDK post initialization... 00:05:29.654 SPDK NVMe probe 00:05:29.654 Attaching to 0000:00:10.0 00:05:29.654 Attaching to 0000:00:11.0 00:05:29.654 Attached to 0000:00:10.0 00:05:29.654 Attached to 0000:00:11.0 00:05:29.654 Cleaning up... 
00:05:29.654 00:05:29.654 real 0m0.234s 00:05:29.654 user 0m0.063s 00:05:29.654 sys 0m0.072s 00:05:29.654 01:06:42 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.654 01:06:42 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.654 ************************************ 00:05:29.654 END TEST env_dpdk_post_init 00:05:29.654 ************************************ 00:05:29.654 01:06:42 env -- env/env.sh@26 -- # uname 00:05:29.654 01:06:42 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:29.654 01:06:42 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.654 01:06:42 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.654 01:06:42 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.654 01:06:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.654 ************************************ 00:05:29.654 START TEST env_mem_callbacks 00:05:29.654 ************************************ 00:05:29.654 01:06:42 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.914 EAL: Detected CPU lcores: 10 00:05:29.914 EAL: Detected NUMA nodes: 1 00:05:29.914 EAL: Detected shared linkage of DPDK 00:05:29.914 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.914 EAL: Selected IOVA mode 'PA' 00:05:29.914 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.914 00:05:29.914 00:05:29.914 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.914 http://cunit.sourceforge.net/ 00:05:29.914 00:05:29.914 00:05:29.914 Suite: memory 00:05:29.914 Test: test ... 
00:05:29.914 register 0x200000200000 2097152 00:05:29.914 malloc 3145728 00:05:29.914 register 0x200000400000 4194304 00:05:29.914 buf 0x200000500000 len 3145728 PASSED 00:05:29.914 malloc 64 00:05:29.914 buf 0x2000004fff40 len 64 PASSED 00:05:29.914 malloc 4194304 00:05:29.914 register 0x200000800000 6291456 00:05:29.914 buf 0x200000a00000 len 4194304 PASSED 00:05:29.914 free 0x200000500000 3145728 00:05:29.914 free 0x2000004fff40 64 00:05:29.914 unregister 0x200000400000 4194304 PASSED 00:05:29.914 free 0x200000a00000 4194304 00:05:29.914 unregister 0x200000800000 6291456 PASSED 00:05:29.914 malloc 8388608 00:05:29.914 register 0x200000400000 10485760 00:05:29.914 buf 0x200000600000 len 8388608 PASSED 00:05:29.914 free 0x200000600000 8388608 00:05:29.914 unregister 0x200000400000 10485760 PASSED 00:05:29.914 passed 00:05:29.914 00:05:29.914 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.914 suites 1 1 n/a 0 0 00:05:29.914 tests 1 1 1 0 0 00:05:29.914 asserts 15 15 15 0 n/a 00:05:29.914 00:05:29.914 Elapsed time = 0.011 seconds 00:05:29.914 00:05:29.914 real 0m0.180s 00:05:29.914 user 0m0.033s 00:05:29.914 sys 0m0.045s 00:05:29.914 01:06:42 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.914 01:06:42 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:29.914 ************************************ 00:05:29.914 END TEST env_mem_callbacks 00:05:29.914 ************************************ 00:05:29.914 00:05:29.914 real 0m2.957s 00:05:29.914 user 0m1.349s 00:05:29.914 sys 0m1.282s 00:05:29.914 01:06:42 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.914 01:06:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.914 ************************************ 00:05:29.914 END TEST env 00:05:29.914 ************************************ 00:05:29.914 01:06:42 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:29.914 01:06:42 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.914 01:06:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.914 01:06:42 -- common/autotest_common.sh@10 -- # set +x 00:05:30.174 ************************************ 00:05:30.175 START TEST rpc 00:05:30.175 ************************************ 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:30.175 * Looking for test storage... 00:05:30.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:30.175 01:06:42 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.175 01:06:42 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.175 01:06:42 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.175 01:06:42 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.175 01:06:42 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.175 01:06:42 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.175 01:06:42 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.175 01:06:42 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.175 01:06:42 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.175 01:06:42 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.175 01:06:42 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.175 01:06:42 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:30.175 01:06:42 rpc -- scripts/common.sh@345 -- # : 1 00:05:30.175 01:06:42 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.175 01:06:42 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.175 01:06:42 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:30.175 01:06:42 rpc -- scripts/common.sh@353 -- # local d=1 00:05:30.175 01:06:42 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.175 01:06:42 rpc -- scripts/common.sh@355 -- # echo 1 00:05:30.175 01:06:42 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.175 01:06:42 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:30.175 01:06:42 rpc -- scripts/common.sh@353 -- # local d=2 00:05:30.175 01:06:42 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.175 01:06:42 rpc -- scripts/common.sh@355 -- # echo 2 00:05:30.175 01:06:42 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.175 01:06:42 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.175 01:06:42 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.175 01:06:42 rpc -- scripts/common.sh@368 -- # return 0 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:30.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.175 --rc genhtml_branch_coverage=1 00:05:30.175 --rc genhtml_function_coverage=1 00:05:30.175 --rc genhtml_legend=1 00:05:30.175 --rc geninfo_all_blocks=1 00:05:30.175 --rc geninfo_unexecuted_blocks=1 00:05:30.175 00:05:30.175 ' 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:30.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.175 --rc genhtml_branch_coverage=1 00:05:30.175 --rc genhtml_function_coverage=1 00:05:30.175 --rc genhtml_legend=1 00:05:30.175 --rc geninfo_all_blocks=1 00:05:30.175 --rc geninfo_unexecuted_blocks=1 00:05:30.175 00:05:30.175 ' 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:30.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:30.175 --rc genhtml_branch_coverage=1 00:05:30.175 --rc genhtml_function_coverage=1 00:05:30.175 --rc genhtml_legend=1 00:05:30.175 --rc geninfo_all_blocks=1 00:05:30.175 --rc geninfo_unexecuted_blocks=1 00:05:30.175 00:05:30.175 ' 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:30.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.175 --rc genhtml_branch_coverage=1 00:05:30.175 --rc genhtml_function_coverage=1 00:05:30.175 --rc genhtml_legend=1 00:05:30.175 --rc geninfo_all_blocks=1 00:05:30.175 --rc geninfo_unexecuted_blocks=1 00:05:30.175 00:05:30.175 ' 00:05:30.175 01:06:42 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68903 00:05:30.175 01:06:42 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:30.175 01:06:42 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.175 01:06:42 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68903 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@831 -- # '[' -z 68903 ']' 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.175 01:06:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.436 [2024-10-15 01:06:42.962061] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:05:30.436 [2024-10-15 01:06:42.962197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68903 ] 00:05:30.436 [2024-10-15 01:06:43.107648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.436 [2024-10-15 01:06:43.134827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:30.436 [2024-10-15 01:06:43.134896] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68903' to capture a snapshot of events at runtime. 00:05:30.436 [2024-10-15 01:06:43.134921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:30.436 [2024-10-15 01:06:43.134930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:30.436 [2024-10-15 01:06:43.134939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68903 for offline analysis/debug. 
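The `waitforlisten 68903` call above blocks until spdk_tgt's UNIX domain socket /var/tmp/spdk.sock exists and answers RPCs. A generic sketch of that poll-with-timeout pattern — `wait_for_path` and its arguments are illustrative, not the autotest helper's real signature, and the real helper additionally probes the socket with rpc.py rather than only checking existence:

```shell
# Poll for a path with a bounded number of retries: the core of the
# waitforlisten pattern.  In the run above the path would be
# /var/tmp/spdk.sock, created once spdk_tgt -e bdev is up.
wait_for_path() {
    local path=$1 retries=${2:-50}
    while (( retries-- > 0 )); do
        if [ -e "$path" ]; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```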
00:05:30.436 [2024-10-15 01:06:43.135334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.379 01:06:43 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:31.379 01:06:43 rpc -- common/autotest_common.sh@864 -- # return 0
00:05:31.379 01:06:43 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:05:31.379 01:06:43 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:05:31.379 01:06:43 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:31.379 01:06:43 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:31.379 01:06:43 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:31.379 01:06:43 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:31.379 01:06:43 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.379 ************************************
00:05:31.379 START TEST rpc_integrity
00:05:31.379 ************************************
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:05:31.379 01:06:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.379 01:06:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:31.379 01:06:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:31.379 01:06:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:31.379 01:06:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.379 01:06:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:31.379 01:06:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.379 01:06:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:31.379 {
00:05:31.379 "name": "Malloc0",
00:05:31.379 "aliases": [
00:05:31.379 "0b77a3c8-ae31-499e-954d-02950bae2654"
00:05:31.379 ],
00:05:31.379 "product_name": "Malloc disk",
00:05:31.379 "block_size": 512,
00:05:31.379 "num_blocks": 16384,
00:05:31.379 "uuid": "0b77a3c8-ae31-499e-954d-02950bae2654",
00:05:31.379 "assigned_rate_limits": {
00:05:31.379 "rw_ios_per_sec": 0,
00:05:31.379 "rw_mbytes_per_sec": 0,
00:05:31.379 "r_mbytes_per_sec": 0,
00:05:31.379 "w_mbytes_per_sec": 0
00:05:31.379 },
00:05:31.379 "claimed": false,
00:05:31.379 "zoned": false,
00:05:31.379 "supported_io_types": {
00:05:31.379 "read": true,
00:05:31.379 "write": true,
00:05:31.379 "unmap": true,
00:05:31.379 "flush": true,
00:05:31.379 "reset": true,
00:05:31.379 "nvme_admin": false,
00:05:31.379 "nvme_io": false,
00:05:31.379 "nvme_io_md": false,
00:05:31.379 "write_zeroes": true,
00:05:31.379 "zcopy": true,
00:05:31.379 "get_zone_info": false,
00:05:31.379 "zone_management": false,
00:05:31.379 "zone_append": false,
00:05:31.379 "compare": false,
00:05:31.379 "compare_and_write": false,
00:05:31.379 "abort": true,
00:05:31.379 "seek_hole": false,
00:05:31.379 "seek_data": false,
00:05:31.379 "copy": true,
00:05:31.379 "nvme_iov_md": false
00:05:31.379 },
00:05:31.379 "memory_domains": [
00:05:31.379 {
00:05:31.379 "dma_device_id": "system",
00:05:31.379 "dma_device_type": 1
00:05:31.379 },
00:05:31.379 {
00:05:31.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:31.379 "dma_device_type": 2
00:05:31.379 }
00:05:31.379 ],
00:05:31.379 "driver_specific": {}
00:05:31.379 }
00:05:31.379 ]'
00:05:31.379 01:06:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:31.379 01:06:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:31.379 01:06:43 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:31.379 [2024-10-15 01:06:43.936455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:31.379 [2024-10-15 01:06:43.936543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:31.379 [2024-10-15 01:06:43.936569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80
00:05:31.379 [2024-10-15 01:06:43.936579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:31.379 [2024-10-15 01:06:43.938844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:31.379 [2024-10-15 01:06:43.938880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:31.379 Passthru0
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.379 01:06:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:31.379 01:06:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.379 01:06:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:31.379 {
00:05:31.379 "name": "Malloc0",
00:05:31.379 "aliases": [
00:05:31.379 "0b77a3c8-ae31-499e-954d-02950bae2654"
00:05:31.379 ],
00:05:31.379 "product_name": "Malloc disk",
00:05:31.379 "block_size": 512,
00:05:31.379 "num_blocks": 16384,
00:05:31.379 "uuid": "0b77a3c8-ae31-499e-954d-02950bae2654",
00:05:31.379 "assigned_rate_limits": {
00:05:31.379 "rw_ios_per_sec": 0,
00:05:31.379 "rw_mbytes_per_sec": 0,
00:05:31.379 "r_mbytes_per_sec": 0,
00:05:31.379 "w_mbytes_per_sec": 0
00:05:31.379 },
00:05:31.379 "claimed": true,
00:05:31.379 "claim_type": "exclusive_write",
00:05:31.379 "zoned": false,
00:05:31.379 "supported_io_types": {
00:05:31.379 "read": true,
00:05:31.379 "write": true,
00:05:31.379 "unmap": true,
00:05:31.379 "flush": true,
00:05:31.379 "reset": true,
00:05:31.379 "nvme_admin": false,
00:05:31.379 "nvme_io": false,
00:05:31.379 "nvme_io_md": false,
00:05:31.379 "write_zeroes": true,
00:05:31.379 "zcopy": true,
00:05:31.379 "get_zone_info": false,
00:05:31.379 "zone_management": false,
00:05:31.379 "zone_append": false,
00:05:31.379 "compare": false,
00:05:31.379 "compare_and_write": false,
00:05:31.379 "abort": true,
00:05:31.379 "seek_hole": false,
00:05:31.379 "seek_data": false,
00:05:31.379 "copy": true,
00:05:31.379 "nvme_iov_md": false
00:05:31.379 },
00:05:31.379 "memory_domains": [
00:05:31.379 {
00:05:31.379 "dma_device_id": "system",
00:05:31.379 "dma_device_type": 1
00:05:31.379 },
00:05:31.379 {
00:05:31.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:31.379 "dma_device_type": 2
00:05:31.379 }
00:05:31.379 ],
00:05:31.379 "driver_specific": {}
00:05:31.379 },
00:05:31.379 {
00:05:31.379 "name": "Passthru0",
00:05:31.379 "aliases": [
00:05:31.379 "b41ac812-854b-5d17-a2d8-150be234b2a1"
00:05:31.379 ],
00:05:31.379 "product_name": "passthru",
00:05:31.379 "block_size": 512,
00:05:31.379 "num_blocks": 16384,
00:05:31.379 "uuid": "b41ac812-854b-5d17-a2d8-150be234b2a1",
00:05:31.379 "assigned_rate_limits": {
00:05:31.379 "rw_ios_per_sec": 0,
00:05:31.379 "rw_mbytes_per_sec": 0,
00:05:31.379 "r_mbytes_per_sec": 0,
00:05:31.380 "w_mbytes_per_sec": 0
00:05:31.380 },
00:05:31.380 "claimed": false,
00:05:31.380 "zoned": false,
00:05:31.380 "supported_io_types": {
00:05:31.380 "read": true,
00:05:31.380 "write": true,
00:05:31.380 "unmap": true,
00:05:31.380 "flush": true,
00:05:31.380 "reset": true,
00:05:31.380 "nvme_admin": false,
00:05:31.380 "nvme_io": false,
00:05:31.380 "nvme_io_md": false,
00:05:31.380 "write_zeroes": true,
00:05:31.380 "zcopy": true,
00:05:31.380 "get_zone_info": false,
00:05:31.380 "zone_management": false,
00:05:31.380 "zone_append": false,
00:05:31.380 "compare": false,
00:05:31.380 "compare_and_write": false,
00:05:31.380 "abort": true,
00:05:31.380 "seek_hole": false,
00:05:31.380 "seek_data": false,
00:05:31.380 "copy": true,
00:05:31.380 "nvme_iov_md": false
00:05:31.380 },
00:05:31.380 "memory_domains": [
00:05:31.380 {
00:05:31.380 "dma_device_id": "system",
00:05:31.380 "dma_device_type": 1
00:05:31.380 },
00:05:31.380 {
00:05:31.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:31.380 "dma_device_type": 2
00:05:31.380 }
00:05:31.380 ],
00:05:31.380 "driver_specific": {
00:05:31.380 "passthru": {
00:05:31.380 "name": "Passthru0",
00:05:31.380 "base_bdev_name": "Malloc0"
00:05:31.380 }
00:05:31.380 }
00:05:31.380 }
00:05:31.380 ]'
00:05:31.380 01:06:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:31.380 01:06:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:31.380 01:06:44 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:31.380 01:06:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.380 01:06:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:31.380 01:06:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.380 01:06:44 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:31.380 01:06:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.380 01:06:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:31.380 01:06:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.380 01:06:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:31.380 01:06:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.380 01:06:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:31.380 01:06:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.380 01:06:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:31.380 01:06:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:31.380 01:06:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:31.380
00:05:31.380 real 0m0.314s
00:05:31.380 user 0m0.178s
00:05:31.380 sys 0m0.059s
00:05:31.380 01:06:44 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.380 01:06:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:31.380 ************************************
00:05:31.380 END TEST rpc_integrity
00:05:31.380 ************************************
00:05:31.640 01:06:44 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:31.640 01:06:44 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:31.640 01:06:44 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:31.640 01:06:44 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.640 ************************************
00:05:31.640 START TEST rpc_plugins
00:05:31.640 ************************************
00:05:31.640 01:06:44 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins
00:05:31.640 01:06:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:31.640 01:06:44 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.640 01:06:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:31.640 01:06:44 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.640 01:06:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:31.640 01:06:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:31.640 01:06:44 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.640 01:06:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:31.640 01:06:44 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.640 01:06:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:31.640 {
00:05:31.640 "name": "Malloc1",
00:05:31.640 "aliases": [
00:05:31.640 "76e6098a-18a6-46dd-bd82-3c97f5eabc55"
00:05:31.640 ],
00:05:31.640 "product_name": "Malloc disk",
00:05:31.640 "block_size": 4096,
00:05:31.640 "num_blocks": 256,
00:05:31.640 "uuid": "76e6098a-18a6-46dd-bd82-3c97f5eabc55",
00:05:31.640 "assigned_rate_limits": {
00:05:31.640 "rw_ios_per_sec": 0,
00:05:31.640 "rw_mbytes_per_sec": 0,
00:05:31.640 "r_mbytes_per_sec": 0,
00:05:31.640 "w_mbytes_per_sec": 0
00:05:31.640 },
00:05:31.640 "claimed": false,
00:05:31.640 "zoned": false,
00:05:31.640 "supported_io_types": {
00:05:31.640 "read": true,
00:05:31.640 "write": true,
00:05:31.640 "unmap": true,
00:05:31.640 "flush": true,
00:05:31.640 "reset": true,
00:05:31.640 "nvme_admin": false,
00:05:31.640 "nvme_io": false,
00:05:31.640 "nvme_io_md": false,
00:05:31.640 "write_zeroes": true,
00:05:31.640 "zcopy": true,
00:05:31.640 "get_zone_info": false,
00:05:31.640 "zone_management": false,
00:05:31.640 "zone_append": false,
00:05:31.640 "compare": false,
00:05:31.640 "compare_and_write": false,
00:05:31.640 "abort": true,
00:05:31.640 "seek_hole": false,
00:05:31.640 "seek_data": false,
00:05:31.640 "copy": true,
00:05:31.640 "nvme_iov_md": false
00:05:31.640 },
00:05:31.640 "memory_domains": [
00:05:31.640 {
00:05:31.640 "dma_device_id": "system",
00:05:31.640 "dma_device_type": 1
00:05:31.640 },
00:05:31.640 {
00:05:31.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:31.640 "dma_device_type": 2
00:05:31.640 }
00:05:31.640 ],
00:05:31.640 "driver_specific": {}
00:05:31.640 }
00:05:31.640 ]'
00:05:31.640 01:06:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:31.640 01:06:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:31.640 01:06:44 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:31.640 01:06:44 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.640 01:06:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:31.640 01:06:44 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.640 01:06:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:31.640 01:06:44 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.640 01:06:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:31.640 01:06:44 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.640 01:06:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:31.640 01:06:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:31.640 01:06:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:31.640
00:05:31.640 real 0m0.155s
00:05:31.640 user 0m0.084s
00:05:31.640 sys 0m0.028s
00:05:31.640 01:06:44 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.640 01:06:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:31.640 ************************************
00:05:31.640 END TEST rpc_plugins
00:05:31.640 ************************************
00:05:31.640 01:06:44 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:31.640 01:06:44 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:31.640 01:06:44 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:31.640 01:06:44 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.900 ************************************
00:05:31.901 START TEST rpc_trace_cmd_test
00:05:31.901 ************************************
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:05:31.901 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68903",
00:05:31.901 "tpoint_group_mask": "0x8",
00:05:31.901 "iscsi_conn": {
00:05:31.901 "mask": "0x2",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 },
00:05:31.901 "scsi": {
00:05:31.901 "mask": "0x4",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 },
00:05:31.901 "bdev": {
00:05:31.901 "mask": "0x8",
00:05:31.901 "tpoint_mask": "0xffffffffffffffff"
00:05:31.901 },
00:05:31.901 "nvmf_rdma": {
00:05:31.901 "mask": "0x10",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 },
00:05:31.901 "nvmf_tcp": {
00:05:31.901 "mask": "0x20",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 },
00:05:31.901 "ftl": {
00:05:31.901 "mask": "0x40",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 },
00:05:31.901 "blobfs": {
00:05:31.901 "mask": "0x80",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 },
00:05:31.901 "dsa": {
00:05:31.901 "mask": "0x200",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 },
00:05:31.901 "thread": {
00:05:31.901 "mask": "0x400",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 },
00:05:31.901 "nvme_pcie": {
00:05:31.901 "mask": "0x800",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 },
00:05:31.901 "iaa": {
00:05:31.901 "mask": "0x1000",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 },
00:05:31.901 "nvme_tcp": {
00:05:31.901 "mask": "0x2000",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 },
00:05:31.901 "bdev_nvme": {
00:05:31.901 "mask": "0x4000",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 },
00:05:31.901 "sock": {
00:05:31.901 "mask": "0x8000",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 },
00:05:31.901 "blob": {
00:05:31.901 "mask": "0x10000",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 },
00:05:31.901 "bdev_raid": {
00:05:31.901 "mask": "0x20000",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 },
00:05:31.901 "scheduler": {
00:05:31.901 "mask": "0x40000",
00:05:31.901 "tpoint_mask": "0x0"
00:05:31.901 }
00:05:31.901 }'
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:05:31.901
00:05:31.901 real 0m0.244s
00:05:31.901 user 0m0.192s
00:05:31.901 sys 0m0.040s
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.901 01:06:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:31.901 ************************************
00:05:31.901 END TEST rpc_trace_cmd_test
00:05:31.901 ************************************
00:05:32.161 01:06:44 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:05:32.161 01:06:44 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:05:32.161 01:06:44 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:05:32.161 01:06:44 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:32.161 01:06:44 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:32.161 01:06:44 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:32.161 ************************************
00:05:32.161 START TEST rpc_daemon_integrity
00:05:32.161 ************************************
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.161 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.162 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:32.162 {
00:05:32.162 "name": "Malloc2",
00:05:32.162 "aliases": [
00:05:32.162 "39dd111c-5fa2-4f41-8eef-c40601ccea3d"
00:05:32.162 ],
00:05:32.162 "product_name": "Malloc disk",
00:05:32.162 "block_size": 512,
00:05:32.162 "num_blocks": 16384,
00:05:32.162 "uuid": "39dd111c-5fa2-4f41-8eef-c40601ccea3d",
00:05:32.162 "assigned_rate_limits": {
00:05:32.162 "rw_ios_per_sec": 0,
00:05:32.162 "rw_mbytes_per_sec": 0,
00:05:32.162 "r_mbytes_per_sec": 0,
00:05:32.162 "w_mbytes_per_sec": 0
00:05:32.162 },
00:05:32.162 "claimed": false,
00:05:32.162 "zoned": false,
00:05:32.162 "supported_io_types": {
00:05:32.162 "read": true,
00:05:32.162 "write": true,
00:05:32.162 "unmap": true,
00:05:32.162 "flush": true,
00:05:32.162 "reset": true,
00:05:32.162 "nvme_admin": false,
00:05:32.162 "nvme_io": false,
00:05:32.162 "nvme_io_md": false,
00:05:32.162 "write_zeroes": true,
00:05:32.162 "zcopy": true,
00:05:32.162 "get_zone_info": false,
00:05:32.162 "zone_management": false,
00:05:32.162 "zone_append": false,
00:05:32.162 "compare": false,
00:05:32.162 "compare_and_write": false,
00:05:32.162 "abort": true,
00:05:32.162 "seek_hole": false,
00:05:32.162 "seek_data": false,
00:05:32.162 "copy": true,
00:05:32.162 "nvme_iov_md": false
00:05:32.162 },
00:05:32.162 "memory_domains": [
00:05:32.162 {
00:05:32.162 "dma_device_id": "system",
00:05:32.162 "dma_device_type": 1
00:05:32.162 },
00:05:32.162 {
00:05:32.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:32.162 "dma_device_type": 2
00:05:32.162 }
00:05:32.162 ],
00:05:32.162 "driver_specific": {}
00:05:32.162 }
00:05:32.162 ]'
00:05:32.162 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:32.162 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:32.162 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:05:32.162 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.162 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.162 [2024-10-15 01:06:44.819430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:05:32.162 [2024-10-15 01:06:44.819491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:32.162 [2024-10-15 01:06:44.819518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:05:32.162 [2024-10-15 01:06:44.819527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:32.162 [2024-10-15 01:06:44.821830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:32.162 [2024-10-15 01:06:44.821867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:32.162 Passthru0
00:05:32.162 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.162 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:32.162 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.162 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.162 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.162 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:32.162 {
00:05:32.162 "name": "Malloc2",
00:05:32.162 "aliases": [
00:05:32.162 "39dd111c-5fa2-4f41-8eef-c40601ccea3d"
00:05:32.162 ],
00:05:32.162 "product_name": "Malloc disk",
00:05:32.162 "block_size": 512,
00:05:32.162 "num_blocks": 16384,
00:05:32.162 "uuid": "39dd111c-5fa2-4f41-8eef-c40601ccea3d",
00:05:32.162 "assigned_rate_limits": {
00:05:32.162 "rw_ios_per_sec": 0,
00:05:32.162 "rw_mbytes_per_sec": 0,
00:05:32.162 "r_mbytes_per_sec": 0,
00:05:32.162 "w_mbytes_per_sec": 0
00:05:32.162 },
00:05:32.162 "claimed": true,
00:05:32.162 "claim_type": "exclusive_write",
00:05:32.162 "zoned": false,
00:05:32.162 "supported_io_types": {
00:05:32.162 "read": true,
00:05:32.162 "write": true,
00:05:32.162 "unmap": true,
00:05:32.162 "flush": true,
00:05:32.162 "reset": true,
00:05:32.162 "nvme_admin": false,
00:05:32.162 "nvme_io": false,
00:05:32.162 "nvme_io_md": false,
00:05:32.162 "write_zeroes": true,
00:05:32.162 "zcopy": true,
00:05:32.162 "get_zone_info": false,
00:05:32.162 "zone_management": false,
00:05:32.162 "zone_append": false,
00:05:32.162 "compare": false,
00:05:32.162 "compare_and_write": false,
00:05:32.162 "abort": true,
00:05:32.162 "seek_hole": false,
00:05:32.162 "seek_data": false,
00:05:32.162 "copy": true,
00:05:32.162 "nvme_iov_md": false
00:05:32.162 },
00:05:32.162 "memory_domains": [
00:05:32.162 {
00:05:32.162 "dma_device_id": "system",
00:05:32.162 "dma_device_type": 1
00:05:32.162 },
00:05:32.162 {
00:05:32.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:32.162 "dma_device_type": 2
00:05:32.162 }
00:05:32.162 ],
00:05:32.162 "driver_specific": {}
00:05:32.162 },
00:05:32.162 {
00:05:32.162 "name": "Passthru0",
00:05:32.162 "aliases": [
00:05:32.162 "23a573b8-40af-502f-833b-d8389aebe5d1"
00:05:32.162 ],
00:05:32.162 "product_name": "passthru",
00:05:32.162 "block_size": 512,
00:05:32.162 "num_blocks": 16384,
00:05:32.162 "uuid": "23a573b8-40af-502f-833b-d8389aebe5d1",
00:05:32.162 "assigned_rate_limits": {
00:05:32.162 "rw_ios_per_sec": 0,
00:05:32.162 "rw_mbytes_per_sec": 0,
00:05:32.162 "r_mbytes_per_sec": 0,
00:05:32.162 "w_mbytes_per_sec": 0
00:05:32.162 },
00:05:32.162 "claimed": false,
00:05:32.162 "zoned": false,
00:05:32.162 "supported_io_types": {
00:05:32.162 "read": true,
00:05:32.162 "write": true,
00:05:32.162 "unmap": true,
00:05:32.162 "flush": true,
00:05:32.162 "reset": true,
00:05:32.162 "nvme_admin": false,
00:05:32.162 "nvme_io": false,
00:05:32.162 "nvme_io_md": false,
00:05:32.162 "write_zeroes": true,
00:05:32.162 "zcopy": true,
00:05:32.162 "get_zone_info": false,
00:05:32.162 "zone_management": false,
00:05:32.162 "zone_append": false,
00:05:32.162 "compare": false,
00:05:32.162 "compare_and_write": false,
00:05:32.162 "abort": true,
00:05:32.162 "seek_hole": false,
00:05:32.162 "seek_data": false,
00:05:32.162 "copy": true,
00:05:32.162 "nvme_iov_md": false
00:05:32.162 },
00:05:32.162 "memory_domains": [
00:05:32.162 {
00:05:32.162 "dma_device_id": "system",
00:05:32.162 "dma_device_type": 1
00:05:32.162 },
00:05:32.162 {
00:05:32.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:32.162 "dma_device_type": 2
00:05:32.162 }
00:05:32.162 ],
00:05:32.162 "driver_specific": {
00:05:32.162 "passthru": {
00:05:32.162 "name": "Passthru0",
00:05:32.162 "base_bdev_name": "Malloc2"
00:05:32.162 }
00:05:32.162 }
00:05:32.162 }
00:05:32.162 ]'
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:32.423
00:05:32.423 real 0m0.313s
00:05:32.423 user 0m0.192s
00:05:32.423 sys 0m0.052s
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:32.423 01:06:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.423 ************************************
00:05:32.423 END TEST rpc_daemon_integrity
00:05:32.423 ************************************
00:05:32.423 01:06:45 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:05:32.423 01:06:45 rpc -- rpc/rpc.sh@84 -- # killprocess 68903
00:05:32.423 01:06:45 rpc -- common/autotest_common.sh@950 -- # '[' -z 68903 ']'
00:05:32.423 01:06:45 rpc -- common/autotest_common.sh@954 -- # kill -0 68903
00:05:32.423 01:06:45 rpc -- common/autotest_common.sh@955 -- # uname
00:05:32.423 01:06:45 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:32.423 01:06:45 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68903
00:05:32.423 01:06:45 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:32.423 01:06:45 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:32.423 01:06:45 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68903'
00:05:32.423 killing process with pid 68903
00:05:32.423 01:06:45 rpc -- common/autotest_common.sh@969 -- # kill 68903
00:05:32.423 01:06:45 rpc -- common/autotest_common.sh@974 -- # wait 68903
00:05:32.992
00:05:32.992 real 0m2.802s
00:05:32.992 user 0m3.361s
00:05:32.992 sys 0m0.844s
00:05:32.992 01:06:45 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:32.992 01:06:45 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:32.992 ************************************
00:05:32.992 END TEST rpc
00:05:32.992 ************************************
00:05:32.992 01:06:45 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:05:32.992 01:06:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:32.992 01:06:45 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:32.992 01:06:45 -- common/autotest_common.sh@10 -- # set +x
00:05:32.992 ************************************
00:05:32.992 START TEST skip_rpc
00:05:32.992 ************************************
00:05:32.992 01:06:45 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:05:32.992 * Looking for test storage...
00:05:32.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:05:32.992 01:06:45 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:32.993 01:06:45 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:05:32.993 01:06:45 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:32.993 01:06:45 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@345 -- # : 1
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:32.993 01:06:45 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:05:33.252 01:06:45 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:05:33.252 01:06:45 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:33.252 01:06:45 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:05:33.252 01:06:45 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:33.252 01:06:45 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:33.252 01:06:45 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:33.252 01:06:45 skip_rpc -- scripts/common.sh@368 -- # return 0
00:05:33.252 01:06:45 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:33.252 01:06:45 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:33.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.252 --rc genhtml_branch_coverage=1
00:05:33.253 --rc genhtml_function_coverage=1
00:05:33.253 --rc genhtml_legend=1
00:05:33.253 --rc geninfo_all_blocks=1
00:05:33.253 --rc geninfo_unexecuted_blocks=1
00:05:33.253
00:05:33.253 '
00:05:33.253 01:06:45 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:33.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.253 --rc genhtml_branch_coverage=1
00:05:33.253 --rc genhtml_function_coverage=1
00:05:33.253 --rc genhtml_legend=1
00:05:33.253 --rc geninfo_all_blocks=1
00:05:33.253 --rc geninfo_unexecuted_blocks=1
00:05:33.253
00:05:33.253 '
00:05:33.253 01:06:45 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:33.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.253 --rc genhtml_branch_coverage=1
00:05:33.253 --rc genhtml_function_coverage=1
00:05:33.253 --rc genhtml_legend=1
00:05:33.253 --rc geninfo_all_blocks=1
00:05:33.253 --rc geninfo_unexecuted_blocks=1
00:05:33.253
00:05:33.253 '
00:05:33.253 01:06:45 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:33.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.253 --rc genhtml_branch_coverage=1
00:05:33.253 --rc genhtml_function_coverage=1
00:05:33.253 --rc genhtml_legend=1
00:05:33.253 --rc geninfo_all_blocks=1
00:05:33.253 --rc geninfo_unexecuted_blocks=1
00:05:33.253
00:05:33.253 '
00:05:33.253 01:06:45 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:05:33.253 01:06:45 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:05:33.253 01:06:45 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:05:33.253 01:06:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:33.253 01:06:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:33.253 01:06:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:33.253 ************************************
00:05:33.253 START TEST skip_rpc
00:05:33.253 ************************************
00:05:33.253 01:06:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc
00:05:33.253 01:06:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69105
00:05:33.253 01:06:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:05:33.253 01:06:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:33.253 01:06:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:05:33.253 [2024-10-15 01:06:45.826873] Starting SPDK v25.01-pre
git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:05:33.253 [2024-10-15 01:06:45.827458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69105 ] 00:05:33.253 [2024-10-15 01:06:45.970805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.513 [2024-10-15 01:06:45.997947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69105 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69105 ']' 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69105 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69105 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.790 killing process with pid 69105 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69105' 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69105 00:05:38.790 01:06:50 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69105 00:05:38.790 00:05:38.790 real 0m5.419s 00:05:38.790 user 0m5.040s 00:05:38.790 sys 0m0.308s 00:05:38.790 01:06:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.790 01:06:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.790 ************************************ 00:05:38.790 END TEST skip_rpc 00:05:38.790 ************************************ 00:05:38.790 01:06:51 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:38.790 01:06:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.790 01:06:51 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.790 01:06:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.790 
************************************ 00:05:38.790 START TEST skip_rpc_with_json 00:05:38.790 ************************************ 00:05:38.790 01:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:38.790 01:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:38.790 01:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69192 00:05:38.790 01:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.790 01:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.790 01:06:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69192 00:05:38.790 01:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69192 ']' 00:05:38.790 01:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.790 01:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.790 01:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.790 01:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.790 01:06:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.790 [2024-10-15 01:06:51.304961] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:05:38.790 [2024-10-15 01:06:51.305472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69192 ] 00:05:38.790 [2024-10-15 01:06:51.430568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.790 [2024-10-15 01:06:51.456887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.731 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.731 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:39.731 01:06:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:39.731 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.731 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.731 [2024-10-15 01:06:52.123032] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:39.731 request: 00:05:39.731 { 00:05:39.731 "trtype": "tcp", 00:05:39.731 "method": "nvmf_get_transports", 00:05:39.731 "req_id": 1 00:05:39.731 } 00:05:39.731 Got JSON-RPC error response 00:05:39.731 response: 00:05:39.731 { 00:05:39.731 "code": -19, 00:05:39.731 "message": "No such device" 00:05:39.731 } 00:05:39.731 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:39.731 01:06:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:39.731 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.731 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.731 [2024-10-15 01:06:52.135143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:39.731 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.731 01:06:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:39.731 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.731 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.731 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.731 01:06:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:39.731 { 00:05:39.731 "subsystems": [ 00:05:39.731 { 00:05:39.731 "subsystem": "fsdev", 00:05:39.731 "config": [ 00:05:39.731 { 00:05:39.731 "method": "fsdev_set_opts", 00:05:39.731 "params": { 00:05:39.731 "fsdev_io_pool_size": 65535, 00:05:39.731 "fsdev_io_cache_size": 256 00:05:39.731 } 00:05:39.731 } 00:05:39.731 ] 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "subsystem": "keyring", 00:05:39.731 "config": [] 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "subsystem": "iobuf", 00:05:39.731 "config": [ 00:05:39.731 { 00:05:39.731 "method": "iobuf_set_options", 00:05:39.731 "params": { 00:05:39.731 "small_pool_count": 8192, 00:05:39.731 "large_pool_count": 1024, 00:05:39.731 "small_bufsize": 8192, 00:05:39.731 "large_bufsize": 135168 00:05:39.731 } 00:05:39.731 } 00:05:39.731 ] 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "subsystem": "sock", 00:05:39.731 "config": [ 00:05:39.731 { 00:05:39.731 "method": "sock_set_default_impl", 00:05:39.731 "params": { 00:05:39.731 "impl_name": "posix" 00:05:39.731 } 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "method": "sock_impl_set_options", 00:05:39.731 "params": { 00:05:39.731 "impl_name": "ssl", 00:05:39.731 "recv_buf_size": 4096, 00:05:39.731 "send_buf_size": 4096, 00:05:39.731 "enable_recv_pipe": true, 00:05:39.731 "enable_quickack": false, 00:05:39.731 "enable_placement_id": 0, 00:05:39.731 
"enable_zerocopy_send_server": true, 00:05:39.731 "enable_zerocopy_send_client": false, 00:05:39.731 "zerocopy_threshold": 0, 00:05:39.731 "tls_version": 0, 00:05:39.731 "enable_ktls": false 00:05:39.731 } 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "method": "sock_impl_set_options", 00:05:39.731 "params": { 00:05:39.731 "impl_name": "posix", 00:05:39.731 "recv_buf_size": 2097152, 00:05:39.731 "send_buf_size": 2097152, 00:05:39.731 "enable_recv_pipe": true, 00:05:39.731 "enable_quickack": false, 00:05:39.731 "enable_placement_id": 0, 00:05:39.731 "enable_zerocopy_send_server": true, 00:05:39.731 "enable_zerocopy_send_client": false, 00:05:39.731 "zerocopy_threshold": 0, 00:05:39.731 "tls_version": 0, 00:05:39.731 "enable_ktls": false 00:05:39.731 } 00:05:39.731 } 00:05:39.731 ] 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "subsystem": "vmd", 00:05:39.731 "config": [] 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "subsystem": "accel", 00:05:39.731 "config": [ 00:05:39.731 { 00:05:39.731 "method": "accel_set_options", 00:05:39.731 "params": { 00:05:39.731 "small_cache_size": 128, 00:05:39.731 "large_cache_size": 16, 00:05:39.731 "task_count": 2048, 00:05:39.731 "sequence_count": 2048, 00:05:39.731 "buf_count": 2048 00:05:39.731 } 00:05:39.731 } 00:05:39.731 ] 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "subsystem": "bdev", 00:05:39.731 "config": [ 00:05:39.731 { 00:05:39.731 "method": "bdev_set_options", 00:05:39.731 "params": { 00:05:39.731 "bdev_io_pool_size": 65535, 00:05:39.731 "bdev_io_cache_size": 256, 00:05:39.731 "bdev_auto_examine": true, 00:05:39.731 "iobuf_small_cache_size": 128, 00:05:39.731 "iobuf_large_cache_size": 16 00:05:39.731 } 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "method": "bdev_raid_set_options", 00:05:39.731 "params": { 00:05:39.731 "process_window_size_kb": 1024, 00:05:39.731 "process_max_bandwidth_mb_sec": 0 00:05:39.731 } 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "method": "bdev_iscsi_set_options", 00:05:39.731 "params": { 00:05:39.731 
"timeout_sec": 30 00:05:39.731 } 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "method": "bdev_nvme_set_options", 00:05:39.731 "params": { 00:05:39.731 "action_on_timeout": "none", 00:05:39.731 "timeout_us": 0, 00:05:39.731 "timeout_admin_us": 0, 00:05:39.731 "keep_alive_timeout_ms": 10000, 00:05:39.731 "arbitration_burst": 0, 00:05:39.731 "low_priority_weight": 0, 00:05:39.731 "medium_priority_weight": 0, 00:05:39.731 "high_priority_weight": 0, 00:05:39.731 "nvme_adminq_poll_period_us": 10000, 00:05:39.731 "nvme_ioq_poll_period_us": 0, 00:05:39.731 "io_queue_requests": 0, 00:05:39.731 "delay_cmd_submit": true, 00:05:39.731 "transport_retry_count": 4, 00:05:39.731 "bdev_retry_count": 3, 00:05:39.731 "transport_ack_timeout": 0, 00:05:39.731 "ctrlr_loss_timeout_sec": 0, 00:05:39.731 "reconnect_delay_sec": 0, 00:05:39.731 "fast_io_fail_timeout_sec": 0, 00:05:39.731 "disable_auto_failback": false, 00:05:39.731 "generate_uuids": false, 00:05:39.731 "transport_tos": 0, 00:05:39.731 "nvme_error_stat": false, 00:05:39.731 "rdma_srq_size": 0, 00:05:39.731 "io_path_stat": false, 00:05:39.731 "allow_accel_sequence": false, 00:05:39.731 "rdma_max_cq_size": 0, 00:05:39.731 "rdma_cm_event_timeout_ms": 0, 00:05:39.731 "dhchap_digests": [ 00:05:39.731 "sha256", 00:05:39.731 "sha384", 00:05:39.731 "sha512" 00:05:39.731 ], 00:05:39.731 "dhchap_dhgroups": [ 00:05:39.731 "null", 00:05:39.731 "ffdhe2048", 00:05:39.731 "ffdhe3072", 00:05:39.731 "ffdhe4096", 00:05:39.731 "ffdhe6144", 00:05:39.731 "ffdhe8192" 00:05:39.731 ] 00:05:39.731 } 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "method": "bdev_nvme_set_hotplug", 00:05:39.731 "params": { 00:05:39.731 "period_us": 100000, 00:05:39.731 "enable": false 00:05:39.731 } 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "method": "bdev_wait_for_examine" 00:05:39.731 } 00:05:39.731 ] 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "subsystem": "scsi", 00:05:39.731 "config": null 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "subsystem": "scheduler", 
00:05:39.731 "config": [ 00:05:39.731 { 00:05:39.731 "method": "framework_set_scheduler", 00:05:39.731 "params": { 00:05:39.731 "name": "static" 00:05:39.731 } 00:05:39.731 } 00:05:39.731 ] 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "subsystem": "vhost_scsi", 00:05:39.731 "config": [] 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "subsystem": "vhost_blk", 00:05:39.731 "config": [] 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "subsystem": "ublk", 00:05:39.731 "config": [] 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "subsystem": "nbd", 00:05:39.731 "config": [] 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "subsystem": "nvmf", 00:05:39.731 "config": [ 00:05:39.731 { 00:05:39.731 "method": "nvmf_set_config", 00:05:39.731 "params": { 00:05:39.731 "discovery_filter": "match_any", 00:05:39.731 "admin_cmd_passthru": { 00:05:39.731 "identify_ctrlr": false 00:05:39.731 }, 00:05:39.731 "dhchap_digests": [ 00:05:39.731 "sha256", 00:05:39.731 "sha384", 00:05:39.731 "sha512" 00:05:39.731 ], 00:05:39.731 "dhchap_dhgroups": [ 00:05:39.731 "null", 00:05:39.731 "ffdhe2048", 00:05:39.731 "ffdhe3072", 00:05:39.731 "ffdhe4096", 00:05:39.731 "ffdhe6144", 00:05:39.731 "ffdhe8192" 00:05:39.731 ] 00:05:39.731 } 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "method": "nvmf_set_max_subsystems", 00:05:39.731 "params": { 00:05:39.731 "max_subsystems": 1024 00:05:39.731 } 00:05:39.731 }, 00:05:39.731 { 00:05:39.731 "method": "nvmf_set_crdt", 00:05:39.731 "params": { 00:05:39.731 "crdt1": 0, 00:05:39.731 "crdt2": 0, 00:05:39.732 "crdt3": 0 00:05:39.732 } 00:05:39.732 }, 00:05:39.732 { 00:05:39.732 "method": "nvmf_create_transport", 00:05:39.732 "params": { 00:05:39.732 "trtype": "TCP", 00:05:39.732 "max_queue_depth": 128, 00:05:39.732 "max_io_qpairs_per_ctrlr": 127, 00:05:39.732 "in_capsule_data_size": 4096, 00:05:39.732 "max_io_size": 131072, 00:05:39.732 "io_unit_size": 131072, 00:05:39.732 "max_aq_depth": 128, 00:05:39.732 "num_shared_buffers": 511, 00:05:39.732 "buf_cache_size": 4294967295, 
00:05:39.732 "dif_insert_or_strip": false, 00:05:39.732 "zcopy": false, 00:05:39.732 "c2h_success": true, 00:05:39.732 "sock_priority": 0, 00:05:39.732 "abort_timeout_sec": 1, 00:05:39.732 "ack_timeout": 0, 00:05:39.732 "data_wr_pool_size": 0 00:05:39.732 } 00:05:39.732 } 00:05:39.732 ] 00:05:39.732 }, 00:05:39.732 { 00:05:39.732 "subsystem": "iscsi", 00:05:39.732 "config": [ 00:05:39.732 { 00:05:39.732 "method": "iscsi_set_options", 00:05:39.732 "params": { 00:05:39.732 "node_base": "iqn.2016-06.io.spdk", 00:05:39.732 "max_sessions": 128, 00:05:39.732 "max_connections_per_session": 2, 00:05:39.732 "max_queue_depth": 64, 00:05:39.732 "default_time2wait": 2, 00:05:39.732 "default_time2retain": 20, 00:05:39.732 "first_burst_length": 8192, 00:05:39.732 "immediate_data": true, 00:05:39.732 "allow_duplicated_isid": false, 00:05:39.732 "error_recovery_level": 0, 00:05:39.732 "nop_timeout": 60, 00:05:39.732 "nop_in_interval": 30, 00:05:39.732 "disable_chap": false, 00:05:39.732 "require_chap": false, 00:05:39.732 "mutual_chap": false, 00:05:39.732 "chap_group": 0, 00:05:39.732 "max_large_datain_per_connection": 64, 00:05:39.732 "max_r2t_per_connection": 4, 00:05:39.732 "pdu_pool_size": 36864, 00:05:39.732 "immediate_data_pool_size": 16384, 00:05:39.732 "data_out_pool_size": 2048 00:05:39.732 } 00:05:39.732 } 00:05:39.732 ] 00:05:39.732 } 00:05:39.732 ] 00:05:39.732 } 00:05:39.732 01:06:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:39.732 01:06:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69192 00:05:39.732 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69192 ']' 00:05:39.732 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69192 00:05:39.732 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:39.732 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:39.732 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69192 00:05:39.732 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.732 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.732 killing process with pid 69192 00:05:39.732 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69192' 00:05:39.732 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69192 00:05:39.732 01:06:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69192 00:05:39.993 01:06:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69221 00:05:39.993 01:06:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:39.993 01:06:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:45.329 01:06:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69221 00:05:45.329 01:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69221 ']' 00:05:45.329 01:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69221 00:05:45.329 01:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:45.329 01:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.329 01:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69221 00:05:45.329 01:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.329 01:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.329 killing process with pid 69221 
00:05:45.329 01:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69221' 00:05:45.329 01:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69221 00:05:45.329 01:06:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69221 00:05:45.589 01:06:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:45.589 01:06:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:45.589 00:05:45.589 real 0m6.918s 00:05:45.589 user 0m6.505s 00:05:45.589 sys 0m0.693s 00:05:45.589 01:06:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.589 01:06:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.589 ************************************ 00:05:45.589 END TEST skip_rpc_with_json 00:05:45.589 ************************************ 00:05:45.589 01:06:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:45.589 01:06:58 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.589 01:06:58 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.589 01:06:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.589 ************************************ 00:05:45.589 START TEST skip_rpc_with_delay 00:05:45.589 ************************************ 00:05:45.590 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:45.590 01:06:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:45.590 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:45.590 01:06:58 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:45.590 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.590 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.590 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.590 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.590 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.590 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.590 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.590 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:45.590 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:45.590 [2024-10-15 01:06:58.288438] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:45.849 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:45.850 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.850 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.850 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.850 00:05:45.850 real 0m0.152s 00:05:45.850 user 0m0.080s 00:05:45.850 sys 0m0.071s 00:05:45.850 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.850 01:06:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:45.850 ************************************ 00:05:45.850 END TEST skip_rpc_with_delay 00:05:45.850 ************************************ 00:05:45.850 01:06:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:45.850 01:06:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:45.850 01:06:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:45.850 01:06:58 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.850 01:06:58 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.850 01:06:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.850 ************************************ 00:05:45.850 START TEST exit_on_failed_rpc_init 00:05:45.850 ************************************ 00:05:45.850 01:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:45.850 01:06:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69327 00:05:45.850 01:06:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.850 01:06:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69327 00:05:45.850 01:06:58 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69327 ']' 00:05:45.850 01:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.850 01:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.850 01:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.850 01:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.850 01:06:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.850 [2024-10-15 01:06:58.514576] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:05:45.850 [2024-10-15 01:06:58.514691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69327 ] 00:05:46.110 [2024-10-15 01:06:58.659150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.110 [2024-10-15 01:06:58.685450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.678 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.678 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:46.678 01:06:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.678 01:06:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:46.678 01:06:59 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:46.678 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:46.678 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.678 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.678 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.678 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.678 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.678 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.678 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.678 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:46.678 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:46.937 [2024-10-15 01:06:59.405494] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:05:46.937 [2024-10-15 01:06:59.405628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69345 ] 00:05:46.937 [2024-10-15 01:06:59.548516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.937 [2024-10-15 01:06:59.577382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.937 [2024-10-15 01:06:59.577463] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:46.937 [2024-10-15 01:06:59.577484] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:46.937 [2024-10-15 01:06:59.577495] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69327 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69327 ']' 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69327 00:05:47.197 01:06:59 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69327 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.197 killing process with pid 69327 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69327' 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69327 00:05:47.197 01:06:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69327 00:05:47.457 00:05:47.457 real 0m1.653s 00:05:47.457 user 0m1.751s 00:05:47.457 sys 0m0.460s 00:05:47.457 01:07:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.457 01:07:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.457 ************************************ 00:05:47.457 END TEST exit_on_failed_rpc_init 00:05:47.457 ************************************ 00:05:47.457 01:07:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:47.457 00:05:47.457 real 0m14.627s 00:05:47.457 user 0m13.599s 00:05:47.457 sys 0m1.816s 00:05:47.457 01:07:00 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.457 01:07:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.457 ************************************ 00:05:47.457 END TEST skip_rpc 00:05:47.457 ************************************ 00:05:47.717 01:07:00 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:47.717 01:07:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.717 01:07:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.717 01:07:00 -- common/autotest_common.sh@10 -- # set +x 00:05:47.717 ************************************ 00:05:47.717 START TEST rpc_client 00:05:47.717 ************************************ 00:05:47.717 01:07:00 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:47.717 * Looking for test storage... 00:05:47.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:47.717 01:07:00 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:47.717 01:07:00 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:47.717 01:07:00 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:47.717 01:07:00 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.717 01:07:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:47.717 01:07:00 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.717 01:07:00 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:47.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.717 --rc genhtml_branch_coverage=1 00:05:47.717 --rc genhtml_function_coverage=1 00:05:47.717 --rc genhtml_legend=1 00:05:47.717 --rc geninfo_all_blocks=1 00:05:47.717 --rc geninfo_unexecuted_blocks=1 00:05:47.717 00:05:47.717 ' 00:05:47.717 01:07:00 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:47.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.717 --rc genhtml_branch_coverage=1 00:05:47.717 --rc genhtml_function_coverage=1 00:05:47.717 --rc 
genhtml_legend=1 00:05:47.717 --rc geninfo_all_blocks=1 00:05:47.717 --rc geninfo_unexecuted_blocks=1 00:05:47.717 00:05:47.717 ' 00:05:47.717 01:07:00 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:47.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.717 --rc genhtml_branch_coverage=1 00:05:47.717 --rc genhtml_function_coverage=1 00:05:47.717 --rc genhtml_legend=1 00:05:47.717 --rc geninfo_all_blocks=1 00:05:47.717 --rc geninfo_unexecuted_blocks=1 00:05:47.717 00:05:47.717 ' 00:05:47.717 01:07:00 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:47.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.717 --rc genhtml_branch_coverage=1 00:05:47.717 --rc genhtml_function_coverage=1 00:05:47.717 --rc genhtml_legend=1 00:05:47.717 --rc geninfo_all_blocks=1 00:05:47.717 --rc geninfo_unexecuted_blocks=1 00:05:47.717 00:05:47.717 ' 00:05:47.717 01:07:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:47.717 OK 00:05:47.977 01:07:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:47.977 00:05:47.977 real 0m0.274s 00:05:47.977 user 0m0.143s 00:05:47.977 sys 0m0.150s 00:05:47.977 01:07:00 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.977 01:07:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:47.977 ************************************ 00:05:47.977 END TEST rpc_client 00:05:47.977 ************************************ 00:05:47.977 01:07:00 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:47.977 01:07:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.977 01:07:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.977 01:07:00 -- common/autotest_common.sh@10 -- # set +x 00:05:47.977 ************************************ 00:05:47.977 START TEST json_config 
00:05:47.977 ************************************ 00:05:47.977 01:07:00 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:47.977 01:07:00 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:47.977 01:07:00 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:47.977 01:07:00 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:47.977 01:07:00 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:47.977 01:07:00 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.977 01:07:00 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.977 01:07:00 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.977 01:07:00 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.977 01:07:00 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.978 01:07:00 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.978 01:07:00 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.978 01:07:00 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.978 01:07:00 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.978 01:07:00 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.978 01:07:00 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.978 01:07:00 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:47.978 01:07:00 json_config -- scripts/common.sh@345 -- # : 1 00:05:47.978 01:07:00 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.978 01:07:00 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.237 01:07:00 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:48.237 01:07:00 json_config -- scripts/common.sh@353 -- # local d=1 00:05:48.237 01:07:00 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.237 01:07:00 json_config -- scripts/common.sh@355 -- # echo 1 00:05:48.237 01:07:00 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.237 01:07:00 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:48.237 01:07:00 json_config -- scripts/common.sh@353 -- # local d=2 00:05:48.237 01:07:00 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.237 01:07:00 json_config -- scripts/common.sh@355 -- # echo 2 00:05:48.237 01:07:00 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.237 01:07:00 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.237 01:07:00 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.237 01:07:00 json_config -- scripts/common.sh@368 -- # return 0 00:05:48.237 01:07:00 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.237 01:07:00 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:48.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.237 --rc genhtml_branch_coverage=1 00:05:48.237 --rc genhtml_function_coverage=1 00:05:48.237 --rc genhtml_legend=1 00:05:48.237 --rc geninfo_all_blocks=1 00:05:48.237 --rc geninfo_unexecuted_blocks=1 00:05:48.237 00:05:48.237 ' 00:05:48.237 01:07:00 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:48.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.237 --rc genhtml_branch_coverage=1 00:05:48.237 --rc genhtml_function_coverage=1 00:05:48.237 --rc genhtml_legend=1 00:05:48.237 --rc geninfo_all_blocks=1 00:05:48.237 --rc geninfo_unexecuted_blocks=1 00:05:48.237 00:05:48.237 ' 00:05:48.237 01:07:00 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:48.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.237 --rc genhtml_branch_coverage=1 00:05:48.237 --rc genhtml_function_coverage=1 00:05:48.237 --rc genhtml_legend=1 00:05:48.237 --rc geninfo_all_blocks=1 00:05:48.237 --rc geninfo_unexecuted_blocks=1 00:05:48.237 00:05:48.237 ' 00:05:48.237 01:07:00 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:48.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.237 --rc genhtml_branch_coverage=1 00:05:48.237 --rc genhtml_function_coverage=1 00:05:48.237 --rc genhtml_legend=1 00:05:48.237 --rc geninfo_all_blocks=1 00:05:48.237 --rc geninfo_unexecuted_blocks=1 00:05:48.237 00:05:48.237 ' 00:05:48.238 01:07:00 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43d29277-c62c-4be4-9b98-829e479f1691 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=43d29277-c62c-4be4-9b98-829e479f1691 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:48.238 01:07:00 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:48.238 01:07:00 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.238 01:07:00 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.238 01:07:00 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.238 01:07:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.238 01:07:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.238 01:07:00 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.238 01:07:00 json_config -- paths/export.sh@5 -- # export PATH 00:05:48.238 01:07:00 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@51 -- # : 0 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:48.238 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:48.238 01:07:00 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:48.238 01:07:00 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
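The repeated `lt 1.15 2` / `cmp_versions 1.15 '<' 2` traces above come from the dotted-version comparator in scripts/common.sh, used here to decide whether the installed lcov predates version 2. A minimal, self-contained sketch of that comparison logic (the helper name and equality handling are illustrative, not copied verbatim from SPDK) is:

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions-style dotted-version comparison traced above:
# split both versions on . and -, then compare component by component.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v d1 d2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing components count as 0
        ((d1 > d2)) && { [[ $op == '>' ]]; return; }
        ((d1 < d2)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '<=' || $op == '>=' || $op == '==' ]]  # all components equal
}

cmp_versions 1.15 '<' 2 && echo "lcov predates 2; enable branch-coverage opts"
```

As in the log, `1.15 < 2` holds because the first components already differ (1 < 2), so the `.15` is never consulted.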
00:05:48.238 WARNING: No tests are enabled so not running JSON configuration tests 00:05:48.238 01:07:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:48.238 01:07:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:48.238 01:07:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:48.238 01:07:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:48.238 01:07:00 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:48.238 01:07:00 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:48.238 ************************************ 00:05:48.238 END TEST json_config 00:05:48.238 ************************************ 00:05:48.238 00:05:48.238 real 0m0.227s 00:05:48.238 user 0m0.142s 00:05:48.238 sys 0m0.091s 00:05:48.238 01:07:00 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.238 01:07:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.238 01:07:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:48.238 01:07:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.238 01:07:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.238 01:07:00 -- common/autotest_common.sh@10 -- # set +x 00:05:48.238 ************************************ 00:05:48.238 START TEST json_config_extra_key 00:05:48.238 ************************************ 00:05:48.238 01:07:00 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:48.238 01:07:00 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:48.238 01:07:00 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:05:48.238 01:07:00 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:48.498 01:07:00 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:48.498 01:07:00 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.498 01:07:00 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.498 01:07:00 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.498 01:07:00 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.498 01:07:00 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.498 01:07:00 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.498 01:07:00 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.498 01:07:00 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.498 01:07:00 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.499 01:07:00 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.499 01:07:00 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.499 01:07:00 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:48.499 01:07:00 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:48.499 01:07:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.499 01:07:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.499 01:07:00 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:48.499 01:07:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:48.499 01:07:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.499 01:07:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:48.499 01:07:00 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.499 01:07:00 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:48.499 01:07:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:48.499 01:07:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.499 01:07:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:48.499 01:07:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.499 01:07:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.499 01:07:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.499 01:07:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:48.499 01:07:01 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.499 01:07:01 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:48.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.499 --rc genhtml_branch_coverage=1 00:05:48.499 --rc genhtml_function_coverage=1 00:05:48.499 --rc genhtml_legend=1 00:05:48.499 --rc geninfo_all_blocks=1 00:05:48.499 --rc geninfo_unexecuted_blocks=1 00:05:48.499 00:05:48.499 ' 00:05:48.499 01:07:01 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:48.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.499 --rc genhtml_branch_coverage=1 00:05:48.499 --rc genhtml_function_coverage=1 00:05:48.499 --rc 
genhtml_legend=1 00:05:48.499 --rc geninfo_all_blocks=1 00:05:48.499 --rc geninfo_unexecuted_blocks=1 00:05:48.499 00:05:48.499 ' 00:05:48.499 01:07:01 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:48.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.499 --rc genhtml_branch_coverage=1 00:05:48.499 --rc genhtml_function_coverage=1 00:05:48.499 --rc genhtml_legend=1 00:05:48.499 --rc geninfo_all_blocks=1 00:05:48.499 --rc geninfo_unexecuted_blocks=1 00:05:48.499 00:05:48.499 ' 00:05:48.499 01:07:01 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:48.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.499 --rc genhtml_branch_coverage=1 00:05:48.499 --rc genhtml_function_coverage=1 00:05:48.499 --rc genhtml_legend=1 00:05:48.499 --rc geninfo_all_blocks=1 00:05:48.499 --rc geninfo_unexecuted_blocks=1 00:05:48.499 00:05:48.499 ' 00:05:48.499 01:07:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:43d29277-c62c-4be4-9b98-829e479f1691 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=43d29277-c62c-4be4-9b98-829e479f1691 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:48.499 01:07:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:48.499 01:07:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.499 01:07:01 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.499 01:07:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.499 01:07:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.499 01:07:01 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.499 01:07:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.499 01:07:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:48.499 01:07:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:48.499 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:48.499 01:07:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:48.499 01:07:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:48.499 01:07:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:48.499 01:07:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:48.499 01:07:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:48.499 01:07:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:48.499 01:07:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:48.499 01:07:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:48.499 01:07:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:48.499 01:07:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:48.499 01:07:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:48.499 01:07:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:48.499 INFO: launching applications... 
00:05:48.499 01:07:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:48.499 01:07:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:48.499 01:07:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:48.499 01:07:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:48.499 01:07:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:48.499 01:07:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:48.499 01:07:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.499 01:07:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.499 01:07:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69533 00:05:48.499 01:07:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:48.499 Waiting for target to run... 00:05:48.499 01:07:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69533 /var/tmp/spdk_tgt.sock 00:05:48.499 01:07:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:48.499 01:07:01 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69533 ']' 00:05:48.499 01:07:01 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.499 01:07:01 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.499 01:07:01 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:48.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:48.499 01:07:01 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.499 01:07:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:48.499 [2024-10-15 01:07:01.143396] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:05:48.499 [2024-10-15 01:07:01.143598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69533 ] 00:05:49.069 [2024-10-15 01:07:01.497731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.069 [2024-10-15 01:07:01.517674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.329 00:05:49.329 INFO: shutting down applications... 00:05:49.329 01:07:01 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.329 01:07:01 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:49.329 01:07:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:49.329 01:07:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:49.329 01:07:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:49.329 01:07:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:49.329 01:07:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:49.329 01:07:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69533 ]] 00:05:49.329 01:07:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69533 00:05:49.329 01:07:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:49.329 01:07:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.329 01:07:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69533 00:05:49.329 01:07:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:49.898 01:07:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:49.898 01:07:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.898 01:07:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69533 00:05:49.898 01:07:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:49.898 01:07:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:49.898 01:07:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:49.898 01:07:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:49.898 SPDK target shutdown done 00:05:49.898 01:07:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:49.898 Success 00:05:49.898 00:05:49.898 real 0m1.642s 00:05:49.898 user 0m1.339s 00:05:49.898 sys 0m0.467s 00:05:49.898 01:07:02 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.899 01:07:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:49.899 ************************************ 
00:05:49.899 END TEST json_config_extra_key 00:05:49.899 ************************************ 00:05:49.899 01:07:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:49.899 01:07:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.899 01:07:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.899 01:07:02 -- common/autotest_common.sh@10 -- # set +x 00:05:49.899 ************************************ 00:05:49.899 START TEST alias_rpc 00:05:49.899 ************************************ 00:05:49.899 01:07:02 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.158 * Looking for test storage... 00:05:50.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:50.158 01:07:02 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:50.158 01:07:02 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:50.158 01:07:02 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:50.158 01:07:02 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:50.158 01:07:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.158 01:07:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.158 01:07:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.158 01:07:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.158 01:07:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.158 01:07:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.158 01:07:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.158 01:07:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.159 01:07:02 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.159 01:07:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:50.159 01:07:02 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.159 01:07:02 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:50.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.159 --rc genhtml_branch_coverage=1 00:05:50.159 --rc genhtml_function_coverage=1 00:05:50.159 --rc genhtml_legend=1 00:05:50.159 --rc geninfo_all_blocks=1 00:05:50.159 --rc geninfo_unexecuted_blocks=1 00:05:50.159 00:05:50.159 ' 00:05:50.159 01:07:02 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:50.159 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.159 --rc genhtml_branch_coverage=1 00:05:50.159 --rc genhtml_function_coverage=1 00:05:50.159 --rc genhtml_legend=1 00:05:50.159 --rc geninfo_all_blocks=1 00:05:50.159 --rc geninfo_unexecuted_blocks=1 00:05:50.159 00:05:50.159 ' 00:05:50.159 01:07:02 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:50.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.159 --rc genhtml_branch_coverage=1 00:05:50.159 --rc genhtml_function_coverage=1 00:05:50.159 --rc genhtml_legend=1 00:05:50.159 --rc geninfo_all_blocks=1 00:05:50.159 --rc geninfo_unexecuted_blocks=1 00:05:50.159 00:05:50.159 ' 00:05:50.159 01:07:02 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:50.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.159 --rc genhtml_branch_coverage=1 00:05:50.159 --rc genhtml_function_coverage=1 00:05:50.159 --rc genhtml_legend=1 00:05:50.159 --rc geninfo_all_blocks=1 00:05:50.159 --rc geninfo_unexecuted_blocks=1 00:05:50.159 00:05:50.159 ' 00:05:50.159 01:07:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:50.159 01:07:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69612 00:05:50.159 01:07:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:50.159 01:07:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69612 00:05:50.159 01:07:02 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69612 ']' 00:05:50.159 01:07:02 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.159 01:07:02 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.159 01:07:02 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:50.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.159 01:07:02 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.159 01:07:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.159 [2024-10-15 01:07:02.833698] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:05:50.159 [2024-10-15 01:07:02.834247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69612 ] 00:05:50.419 [2024-10-15 01:07:02.978259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.419 [2024-10-15 01:07:03.004443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.989 01:07:03 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.989 01:07:03 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:50.989 01:07:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:51.249 01:07:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69612 00:05:51.249 01:07:03 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69612 ']' 00:05:51.249 01:07:03 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69612 00:05:51.249 01:07:03 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:51.249 01:07:03 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.249 01:07:03 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69612 00:05:51.249 01:07:03 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.249 01:07:03 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.249 01:07:03 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 69612' 00:05:51.249 killing process with pid 69612 00:05:51.249 01:07:03 alias_rpc -- common/autotest_common.sh@969 -- # kill 69612 00:05:51.249 01:07:03 alias_rpc -- common/autotest_common.sh@974 -- # wait 69612 00:05:51.819 ************************************ 00:05:51.819 END TEST alias_rpc 00:05:51.819 ************************************ 00:05:51.819 00:05:51.819 real 0m1.745s 00:05:51.819 user 0m1.753s 00:05:51.819 sys 0m0.507s 00:05:51.819 01:07:04 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.819 01:07:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.819 01:07:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:51.819 01:07:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:51.819 01:07:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.819 01:07:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.819 01:07:04 -- common/autotest_common.sh@10 -- # set +x 00:05:51.819 ************************************ 00:05:51.819 START TEST spdkcli_tcp 00:05:51.819 ************************************ 00:05:51.819 01:07:04 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:51.819 * Looking for test storage... 
00:05:51.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:51.819 01:07:04 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:51.820 01:07:04 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:51.820 01:07:04 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:51.820 01:07:04 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.820 01:07:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:52.080 01:07:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:52.080 01:07:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.080 01:07:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:52.080 01:07:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.080 01:07:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.080 01:07:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.080 01:07:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:52.080 01:07:04 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.080 01:07:04 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:52.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.080 --rc genhtml_branch_coverage=1 00:05:52.080 --rc genhtml_function_coverage=1 00:05:52.080 --rc genhtml_legend=1 00:05:52.080 --rc geninfo_all_blocks=1 00:05:52.080 --rc geninfo_unexecuted_blocks=1 00:05:52.080 00:05:52.080 ' 00:05:52.080 01:07:04 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:52.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.080 --rc genhtml_branch_coverage=1 00:05:52.080 --rc genhtml_function_coverage=1 00:05:52.080 --rc genhtml_legend=1 00:05:52.080 --rc geninfo_all_blocks=1 00:05:52.080 --rc geninfo_unexecuted_blocks=1 00:05:52.080 00:05:52.080 ' 00:05:52.080 01:07:04 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:52.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.080 --rc genhtml_branch_coverage=1 00:05:52.080 --rc genhtml_function_coverage=1 00:05:52.080 --rc genhtml_legend=1 00:05:52.080 --rc geninfo_all_blocks=1 00:05:52.080 --rc geninfo_unexecuted_blocks=1 00:05:52.080 00:05:52.080 ' 00:05:52.081 01:07:04 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:52.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.081 --rc genhtml_branch_coverage=1 00:05:52.081 --rc genhtml_function_coverage=1 00:05:52.081 --rc genhtml_legend=1 00:05:52.081 --rc geninfo_all_blocks=1 00:05:52.081 --rc geninfo_unexecuted_blocks=1 00:05:52.081 00:05:52.081 ' 00:05:52.081 01:07:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:52.081 01:07:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:52.081 01:07:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:52.081 01:07:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:52.081 01:07:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:52.081 01:07:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:52.081 01:07:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:52.081 01:07:04 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.081 01:07:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:52.081 01:07:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69686 00:05:52.081 01:07:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69686 00:05:52.081 01:07:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:52.081 01:07:04 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 69686 ']' 00:05:52.081 01:07:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.081 01:07:04 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.081 01:07:04 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.081 01:07:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.081 01:07:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:52.081 [2024-10-15 01:07:04.654083] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:05:52.081 [2024-10-15 01:07:04.654241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69686 ] 00:05:52.081 [2024-10-15 01:07:04.782602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.341 [2024-10-15 01:07:04.811094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.341 [2024-10-15 01:07:04.811254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.912 01:07:05 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.912 01:07:05 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:52.912 01:07:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69703 00:05:52.912 01:07:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:52.912 01:07:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:53.174 [ 00:05:53.174 "bdev_malloc_delete", 
00:05:53.174 "bdev_malloc_create", 00:05:53.174 "bdev_null_resize", 00:05:53.174 "bdev_null_delete", 00:05:53.174 "bdev_null_create", 00:05:53.174 "bdev_nvme_cuse_unregister", 00:05:53.174 "bdev_nvme_cuse_register", 00:05:53.174 "bdev_opal_new_user", 00:05:53.174 "bdev_opal_set_lock_state", 00:05:53.174 "bdev_opal_delete", 00:05:53.174 "bdev_opal_get_info", 00:05:53.174 "bdev_opal_create", 00:05:53.174 "bdev_nvme_opal_revert", 00:05:53.174 "bdev_nvme_opal_init", 00:05:53.174 "bdev_nvme_send_cmd", 00:05:53.174 "bdev_nvme_set_keys", 00:05:53.174 "bdev_nvme_get_path_iostat", 00:05:53.174 "bdev_nvme_get_mdns_discovery_info", 00:05:53.174 "bdev_nvme_stop_mdns_discovery", 00:05:53.174 "bdev_nvme_start_mdns_discovery", 00:05:53.174 "bdev_nvme_set_multipath_policy", 00:05:53.174 "bdev_nvme_set_preferred_path", 00:05:53.174 "bdev_nvme_get_io_paths", 00:05:53.174 "bdev_nvme_remove_error_injection", 00:05:53.174 "bdev_nvme_add_error_injection", 00:05:53.174 "bdev_nvme_get_discovery_info", 00:05:53.174 "bdev_nvme_stop_discovery", 00:05:53.174 "bdev_nvme_start_discovery", 00:05:53.174 "bdev_nvme_get_controller_health_info", 00:05:53.174 "bdev_nvme_disable_controller", 00:05:53.174 "bdev_nvme_enable_controller", 00:05:53.174 "bdev_nvme_reset_controller", 00:05:53.174 "bdev_nvme_get_transport_statistics", 00:05:53.174 "bdev_nvme_apply_firmware", 00:05:53.174 "bdev_nvme_detach_controller", 00:05:53.174 "bdev_nvme_get_controllers", 00:05:53.174 "bdev_nvme_attach_controller", 00:05:53.174 "bdev_nvme_set_hotplug", 00:05:53.174 "bdev_nvme_set_options", 00:05:53.174 "bdev_passthru_delete", 00:05:53.174 "bdev_passthru_create", 00:05:53.174 "bdev_lvol_set_parent_bdev", 00:05:53.174 "bdev_lvol_set_parent", 00:05:53.174 "bdev_lvol_check_shallow_copy", 00:05:53.174 "bdev_lvol_start_shallow_copy", 00:05:53.174 "bdev_lvol_grow_lvstore", 00:05:53.174 "bdev_lvol_get_lvols", 00:05:53.174 "bdev_lvol_get_lvstores", 00:05:53.174 "bdev_lvol_delete", 00:05:53.174 "bdev_lvol_set_read_only", 
00:05:53.174 "bdev_lvol_resize", 00:05:53.174 "bdev_lvol_decouple_parent", 00:05:53.174 "bdev_lvol_inflate", 00:05:53.174 "bdev_lvol_rename", 00:05:53.174 "bdev_lvol_clone_bdev", 00:05:53.174 "bdev_lvol_clone", 00:05:53.174 "bdev_lvol_snapshot", 00:05:53.174 "bdev_lvol_create", 00:05:53.174 "bdev_lvol_delete_lvstore", 00:05:53.174 "bdev_lvol_rename_lvstore", 00:05:53.174 "bdev_lvol_create_lvstore", 00:05:53.174 "bdev_raid_set_options", 00:05:53.174 "bdev_raid_remove_base_bdev", 00:05:53.174 "bdev_raid_add_base_bdev", 00:05:53.174 "bdev_raid_delete", 00:05:53.174 "bdev_raid_create", 00:05:53.174 "bdev_raid_get_bdevs", 00:05:53.174 "bdev_error_inject_error", 00:05:53.174 "bdev_error_delete", 00:05:53.174 "bdev_error_create", 00:05:53.174 "bdev_split_delete", 00:05:53.174 "bdev_split_create", 00:05:53.174 "bdev_delay_delete", 00:05:53.174 "bdev_delay_create", 00:05:53.174 "bdev_delay_update_latency", 00:05:53.174 "bdev_zone_block_delete", 00:05:53.174 "bdev_zone_block_create", 00:05:53.174 "blobfs_create", 00:05:53.174 "blobfs_detect", 00:05:53.174 "blobfs_set_cache_size", 00:05:53.174 "bdev_aio_delete", 00:05:53.174 "bdev_aio_rescan", 00:05:53.174 "bdev_aio_create", 00:05:53.174 "bdev_ftl_set_property", 00:05:53.174 "bdev_ftl_get_properties", 00:05:53.174 "bdev_ftl_get_stats", 00:05:53.174 "bdev_ftl_unmap", 00:05:53.174 "bdev_ftl_unload", 00:05:53.174 "bdev_ftl_delete", 00:05:53.174 "bdev_ftl_load", 00:05:53.174 "bdev_ftl_create", 00:05:53.174 "bdev_virtio_attach_controller", 00:05:53.174 "bdev_virtio_scsi_get_devices", 00:05:53.174 "bdev_virtio_detach_controller", 00:05:53.174 "bdev_virtio_blk_set_hotplug", 00:05:53.174 "bdev_iscsi_delete", 00:05:53.174 "bdev_iscsi_create", 00:05:53.174 "bdev_iscsi_set_options", 00:05:53.174 "accel_error_inject_error", 00:05:53.174 "ioat_scan_accel_module", 00:05:53.174 "dsa_scan_accel_module", 00:05:53.174 "iaa_scan_accel_module", 00:05:53.174 "keyring_file_remove_key", 00:05:53.174 "keyring_file_add_key", 00:05:53.174 
"keyring_linux_set_options", 00:05:53.174 "fsdev_aio_delete", 00:05:53.174 "fsdev_aio_create", 00:05:53.174 "iscsi_get_histogram", 00:05:53.174 "iscsi_enable_histogram", 00:05:53.174 "iscsi_set_options", 00:05:53.174 "iscsi_get_auth_groups", 00:05:53.174 "iscsi_auth_group_remove_secret", 00:05:53.174 "iscsi_auth_group_add_secret", 00:05:53.174 "iscsi_delete_auth_group", 00:05:53.174 "iscsi_create_auth_group", 00:05:53.174 "iscsi_set_discovery_auth", 00:05:53.174 "iscsi_get_options", 00:05:53.174 "iscsi_target_node_request_logout", 00:05:53.174 "iscsi_target_node_set_redirect", 00:05:53.174 "iscsi_target_node_set_auth", 00:05:53.174 "iscsi_target_node_add_lun", 00:05:53.174 "iscsi_get_stats", 00:05:53.174 "iscsi_get_connections", 00:05:53.174 "iscsi_portal_group_set_auth", 00:05:53.174 "iscsi_start_portal_group", 00:05:53.174 "iscsi_delete_portal_group", 00:05:53.174 "iscsi_create_portal_group", 00:05:53.174 "iscsi_get_portal_groups", 00:05:53.174 "iscsi_delete_target_node", 00:05:53.174 "iscsi_target_node_remove_pg_ig_maps", 00:05:53.174 "iscsi_target_node_add_pg_ig_maps", 00:05:53.174 "iscsi_create_target_node", 00:05:53.174 "iscsi_get_target_nodes", 00:05:53.174 "iscsi_delete_initiator_group", 00:05:53.174 "iscsi_initiator_group_remove_initiators", 00:05:53.174 "iscsi_initiator_group_add_initiators", 00:05:53.174 "iscsi_create_initiator_group", 00:05:53.174 "iscsi_get_initiator_groups", 00:05:53.174 "nvmf_set_crdt", 00:05:53.174 "nvmf_set_config", 00:05:53.174 "nvmf_set_max_subsystems", 00:05:53.174 "nvmf_stop_mdns_prr", 00:05:53.174 "nvmf_publish_mdns_prr", 00:05:53.174 "nvmf_subsystem_get_listeners", 00:05:53.174 "nvmf_subsystem_get_qpairs", 00:05:53.174 "nvmf_subsystem_get_controllers", 00:05:53.174 "nvmf_get_stats", 00:05:53.174 "nvmf_get_transports", 00:05:53.174 "nvmf_create_transport", 00:05:53.174 "nvmf_get_targets", 00:05:53.174 "nvmf_delete_target", 00:05:53.174 "nvmf_create_target", 00:05:53.174 "nvmf_subsystem_allow_any_host", 00:05:53.174 
"nvmf_subsystem_set_keys", 00:05:53.174 "nvmf_subsystem_remove_host", 00:05:53.174 "nvmf_subsystem_add_host", 00:05:53.174 "nvmf_ns_remove_host", 00:05:53.174 "nvmf_ns_add_host", 00:05:53.174 "nvmf_subsystem_remove_ns", 00:05:53.174 "nvmf_subsystem_set_ns_ana_group", 00:05:53.174 "nvmf_subsystem_add_ns", 00:05:53.174 "nvmf_subsystem_listener_set_ana_state", 00:05:53.174 "nvmf_discovery_get_referrals", 00:05:53.174 "nvmf_discovery_remove_referral", 00:05:53.174 "nvmf_discovery_add_referral", 00:05:53.174 "nvmf_subsystem_remove_listener", 00:05:53.174 "nvmf_subsystem_add_listener", 00:05:53.174 "nvmf_delete_subsystem", 00:05:53.174 "nvmf_create_subsystem", 00:05:53.174 "nvmf_get_subsystems", 00:05:53.174 "env_dpdk_get_mem_stats", 00:05:53.174 "nbd_get_disks", 00:05:53.174 "nbd_stop_disk", 00:05:53.174 "nbd_start_disk", 00:05:53.174 "ublk_recover_disk", 00:05:53.175 "ublk_get_disks", 00:05:53.175 "ublk_stop_disk", 00:05:53.175 "ublk_start_disk", 00:05:53.175 "ublk_destroy_target", 00:05:53.175 "ublk_create_target", 00:05:53.175 "virtio_blk_create_transport", 00:05:53.175 "virtio_blk_get_transports", 00:05:53.175 "vhost_controller_set_coalescing", 00:05:53.175 "vhost_get_controllers", 00:05:53.175 "vhost_delete_controller", 00:05:53.175 "vhost_create_blk_controller", 00:05:53.175 "vhost_scsi_controller_remove_target", 00:05:53.175 "vhost_scsi_controller_add_target", 00:05:53.175 "vhost_start_scsi_controller", 00:05:53.175 "vhost_create_scsi_controller", 00:05:53.175 "thread_set_cpumask", 00:05:53.175 "scheduler_set_options", 00:05:53.175 "framework_get_governor", 00:05:53.175 "framework_get_scheduler", 00:05:53.175 "framework_set_scheduler", 00:05:53.175 "framework_get_reactors", 00:05:53.175 "thread_get_io_channels", 00:05:53.175 "thread_get_pollers", 00:05:53.175 "thread_get_stats", 00:05:53.175 "framework_monitor_context_switch", 00:05:53.175 "spdk_kill_instance", 00:05:53.175 "log_enable_timestamps", 00:05:53.175 "log_get_flags", 00:05:53.175 "log_clear_flag", 
00:05:53.175 "log_set_flag", 00:05:53.175 "log_get_level", 00:05:53.175 "log_set_level", 00:05:53.175 "log_get_print_level", 00:05:53.175 "log_set_print_level", 00:05:53.175 "framework_enable_cpumask_locks", 00:05:53.175 "framework_disable_cpumask_locks", 00:05:53.175 "framework_wait_init", 00:05:53.175 "framework_start_init", 00:05:53.175 "scsi_get_devices", 00:05:53.175 "bdev_get_histogram", 00:05:53.175 "bdev_enable_histogram", 00:05:53.175 "bdev_set_qos_limit", 00:05:53.175 "bdev_set_qd_sampling_period", 00:05:53.175 "bdev_get_bdevs", 00:05:53.175 "bdev_reset_iostat", 00:05:53.175 "bdev_get_iostat", 00:05:53.175 "bdev_examine", 00:05:53.175 "bdev_wait_for_examine", 00:05:53.175 "bdev_set_options", 00:05:53.175 "accel_get_stats", 00:05:53.175 "accel_set_options", 00:05:53.175 "accel_set_driver", 00:05:53.175 "accel_crypto_key_destroy", 00:05:53.175 "accel_crypto_keys_get", 00:05:53.175 "accel_crypto_key_create", 00:05:53.175 "accel_assign_opc", 00:05:53.175 "accel_get_module_info", 00:05:53.175 "accel_get_opc_assignments", 00:05:53.175 "vmd_rescan", 00:05:53.175 "vmd_remove_device", 00:05:53.175 "vmd_enable", 00:05:53.175 "sock_get_default_impl", 00:05:53.175 "sock_set_default_impl", 00:05:53.175 "sock_impl_set_options", 00:05:53.175 "sock_impl_get_options", 00:05:53.175 "iobuf_get_stats", 00:05:53.175 "iobuf_set_options", 00:05:53.175 "keyring_get_keys", 00:05:53.175 "framework_get_pci_devices", 00:05:53.175 "framework_get_config", 00:05:53.175 "framework_get_subsystems", 00:05:53.175 "fsdev_set_opts", 00:05:53.175 "fsdev_get_opts", 00:05:53.175 "trace_get_info", 00:05:53.175 "trace_get_tpoint_group_mask", 00:05:53.175 "trace_disable_tpoint_group", 00:05:53.175 "trace_enable_tpoint_group", 00:05:53.175 "trace_clear_tpoint_mask", 00:05:53.175 "trace_set_tpoint_mask", 00:05:53.175 "notify_get_notifications", 00:05:53.175 "notify_get_types", 00:05:53.175 "spdk_get_version", 00:05:53.175 "rpc_get_methods" 00:05:53.175 ] 00:05:53.175 01:07:05 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:53.175 01:07:05 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:53.175 01:07:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.175 01:07:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:53.175 01:07:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69686 00:05:53.175 01:07:05 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 69686 ']' 00:05:53.175 01:07:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 69686 00:05:53.175 01:07:05 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:53.175 01:07:05 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.175 01:07:05 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69686 00:05:53.175 killing process with pid 69686 00:05:53.175 01:07:05 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.175 01:07:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.175 01:07:05 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69686' 00:05:53.175 01:07:05 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 69686 00:05:53.175 01:07:05 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 69686 00:05:53.435 ************************************ 00:05:53.435 END TEST spdkcli_tcp 00:05:53.435 ************************************ 00:05:53.435 00:05:53.435 real 0m1.797s 00:05:53.435 user 0m3.054s 00:05:53.435 sys 0m0.550s 00:05:53.436 01:07:06 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.436 01:07:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.696 01:07:06 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.696 01:07:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.696 01:07:06 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.696 01:07:06 -- common/autotest_common.sh@10 -- # set +x 00:05:53.696 ************************************ 00:05:53.696 START TEST dpdk_mem_utility 00:05:53.696 ************************************ 00:05:53.696 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.696 * Looking for test storage... 00:05:53.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:53.696 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:53.696 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:53.696 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:53.696 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:53.696 01:07:06 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.696 01:07:06 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.696 01:07:06 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.696 01:07:06 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.696 01:07:06 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.696 01:07:06 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.696 01:07:06 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.696 01:07:06 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.696 01:07:06 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.696 01:07:06 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.696 01:07:06 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.696 01:07:06 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:53.696 01:07:06 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:53.696 
01:07:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.696 01:07:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.696 01:07:06 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:53.697 01:07:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:53.697 01:07:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.697 01:07:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:53.697 01:07:06 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.697 01:07:06 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:53.697 01:07:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:53.697 01:07:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.697 01:07:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:53.697 01:07:06 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.697 01:07:06 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.697 01:07:06 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.697 01:07:06 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:53.697 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.697 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:53.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.697 --rc genhtml_branch_coverage=1 00:05:53.697 --rc genhtml_function_coverage=1 00:05:53.697 --rc genhtml_legend=1 00:05:53.697 --rc geninfo_all_blocks=1 00:05:53.697 --rc geninfo_unexecuted_blocks=1 00:05:53.697 00:05:53.697 ' 00:05:53.697 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:53.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.697 --rc 
genhtml_branch_coverage=1 00:05:53.697 --rc genhtml_function_coverage=1 00:05:53.697 --rc genhtml_legend=1 00:05:53.697 --rc geninfo_all_blocks=1 00:05:53.697 --rc geninfo_unexecuted_blocks=1 00:05:53.697 00:05:53.697 ' 00:05:53.697 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:53.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.697 --rc genhtml_branch_coverage=1 00:05:53.697 --rc genhtml_function_coverage=1 00:05:53.697 --rc genhtml_legend=1 00:05:53.697 --rc geninfo_all_blocks=1 00:05:53.697 --rc geninfo_unexecuted_blocks=1 00:05:53.697 00:05:53.697 ' 00:05:53.697 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:53.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.697 --rc genhtml_branch_coverage=1 00:05:53.697 --rc genhtml_function_coverage=1 00:05:53.697 --rc genhtml_legend=1 00:05:53.697 --rc geninfo_all_blocks=1 00:05:53.697 --rc geninfo_unexecuted_blocks=1 00:05:53.697 00:05:53.697 ' 00:05:53.697 01:07:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:53.697 01:07:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69786 00:05:53.697 01:07:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.697 01:07:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69786 00:05:53.697 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 69786 ']' 00:05:53.697 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.697 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.697 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:53.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.697 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.697 01:07:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:53.958 [2024-10-15 01:07:06.503313] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:05:53.958 [2024-10-15 01:07:06.503530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69786 ] 00:05:53.958 [2024-10-15 01:07:06.647533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.958 [2024-10-15 01:07:06.675064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.900 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.900 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:54.900 01:07:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:54.900 01:07:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:54.900 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.900 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:54.900 { 00:05:54.900 "filename": "/tmp/spdk_mem_dump.txt" 00:05:54.900 } 00:05:54.900 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.900 01:07:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:54.900 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:54.900 1 heaps totaling size 810.000000 MiB 00:05:54.900 size: 
810.000000 MiB heap id: 0 00:05:54.900 end heaps---------- 00:05:54.900 9 mempools totaling size 595.772034 MiB 00:05:54.900 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:54.900 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:54.900 size: 92.545471 MiB name: bdev_io_69786 00:05:54.900 size: 50.003479 MiB name: msgpool_69786 00:05:54.900 size: 36.509338 MiB name: fsdev_io_69786 00:05:54.900 size: 21.763794 MiB name: PDU_Pool 00:05:54.900 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:54.900 size: 4.133484 MiB name: evtpool_69786 00:05:54.900 size: 0.026123 MiB name: Session_Pool 00:05:54.900 end mempools------- 00:05:54.900 6 memzones totaling size 4.142822 MiB 00:05:54.900 size: 1.000366 MiB name: RG_ring_0_69786 00:05:54.900 size: 1.000366 MiB name: RG_ring_1_69786 00:05:54.900 size: 1.000366 MiB name: RG_ring_4_69786 00:05:54.900 size: 1.000366 MiB name: RG_ring_5_69786 00:05:54.900 size: 0.125366 MiB name: RG_ring_2_69786 00:05:54.900 size: 0.015991 MiB name: RG_ring_3_69786 00:05:54.900 end memzones------- 00:05:54.900 01:07:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:54.900 heap id: 0 total size: 810.000000 MiB number of busy elements: 308 number of free elements: 15 00:05:54.900 list of free elements. 
size: 10.814148 MiB 00:05:54.900 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:54.900 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:54.900 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:54.900 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:54.900 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:54.900 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:54.900 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:54.900 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:54.900 element at address: 0x20001a600000 with size: 0.568054 MiB 00:05:54.900 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:54.900 element at address: 0x200000c00000 with size: 0.487000 MiB 00:05:54.900 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:54.900 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:54.900 element at address: 0x200027a00000 with size: 0.396301 MiB 00:05:54.900 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:54.900 list of standard malloc elements. 
size: 199.266968 MiB 00:05:54.900 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:54.900 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:54.900 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:54.900 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:54.900 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:54.900 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:54.900 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:54.900 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:54.900 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:54.900 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:54.900 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:54.901 element at 
address: 0x2000004ff340 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:05:54.901 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7d6c0 with 
size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:54.901 element at address: 
0x200000c7ebc0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:54.901 
element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:54.901 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:54.901 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a691780 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a691840 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a691900 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a692080 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a692140 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a692200 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a692380 with size: 0.000183 MiB 00:05:54.901 element at address: 0x20001a692440 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a692500 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a6925c0 with size: 0.000183 
MiB 00:05:54.902 element at address: 0x20001a692680 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a692740 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a692800 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a692980 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693040 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693100 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693280 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693340 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693400 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693580 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693640 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693700 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693880 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693940 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693ac0 
with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694000 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694180 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694240 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694300 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694480 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694540 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694600 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694780 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694840 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694900 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:05:54.902 element at 
address: 0x20001a694fc0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a695080 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a695140 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a695200 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:54.902 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a65740 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a65800 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6c400 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 
00:05:54.902 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6e7c0 with 
size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:05:54.902 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:05:54.903 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:05:54.903 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:05:54.903 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:05:54.903 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:05:54.903 element at address: 
0x200027a6fcc0 with size: 0.000183 MiB 00:05:54.903 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:05:54.903 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:54.903 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:54.903 list of memzone associated elements. size: 599.918884 MiB 00:05:54.903 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:54.903 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:54.903 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:54.903 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:54.903 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:54.903 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_69786_0 00:05:54.903 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:54.903 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69786_0 00:05:54.903 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:54.903 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69786_0 00:05:54.903 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:54.903 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:54.903 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:54.903 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:54.903 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:54.903 associated memzone info: size: 3.000122 MiB name: MP_evtpool_69786_0 00:05:54.903 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:54.903 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69786 00:05:54.903 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:54.903 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69786 00:05:54.903 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:54.903 associated memzone 
info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:54.903 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:54.903 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:54.903 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:54.903 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:54.903 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:54.903 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:54.903 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:54.903 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69786 00:05:54.903 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:54.903 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69786 00:05:54.903 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:54.903 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69786 00:05:54.903 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:54.903 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69786 00:05:54.903 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:54.903 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69786 00:05:54.903 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:54.903 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69786 00:05:54.903 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:54.903 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:54.903 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:54.903 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:54.903 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:54.903 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:54.903 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:54.903 associated 
memzone info: size: 0.125366 MiB name: RG_MP_evtpool_69786 00:05:54.903 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:54.903 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69786 00:05:54.903 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:54.903 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:54.903 element at address: 0x200027a658c0 with size: 0.023743 MiB 00:05:54.903 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:54.903 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:54.903 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69786 00:05:54.903 element at address: 0x200027a6ba00 with size: 0.002441 MiB 00:05:54.903 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:54.903 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:54.903 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69786 00:05:54.903 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:54.903 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69786 00:05:54.903 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:54.903 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69786 00:05:54.903 element at address: 0x200027a6c4c0 with size: 0.000305 MiB 00:05:54.903 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:54.903 01:07:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:54.903 01:07:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69786 00:05:54.903 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 69786 ']' 00:05:54.903 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 69786 00:05:54.903 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:54.903 01:07:07 dpdk_mem_utility -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.903 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69786 00:05:54.903 killing process with pid 69786 00:05:54.903 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.903 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.903 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69786' 00:05:54.903 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 69786 00:05:54.903 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 69786 00:05:55.165 00:05:55.165 real 0m1.633s 00:05:55.165 user 0m1.585s 00:05:55.165 sys 0m0.485s 00:05:55.165 ************************************ 00:05:55.165 END TEST dpdk_mem_utility 00:05:55.165 ************************************ 00:05:55.165 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.165 01:07:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:55.165 01:07:07 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:55.165 01:07:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.165 01:07:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.165 01:07:07 -- common/autotest_common.sh@10 -- # set +x 00:05:55.165 ************************************ 00:05:55.165 START TEST event 00:05:55.165 ************************************ 00:05:55.165 01:07:07 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:55.430 * Looking for test storage... 
00:05:55.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:55.430 01:07:08 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:55.430 01:07:08 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:55.430 01:07:08 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:55.430 01:07:08 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:55.430 01:07:08 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.430 01:07:08 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.430 01:07:08 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.430 01:07:08 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.430 01:07:08 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.430 01:07:08 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.430 01:07:08 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.430 01:07:08 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.430 01:07:08 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.430 01:07:08 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.430 01:07:08 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.431 01:07:08 event -- scripts/common.sh@344 -- # case "$op" in 00:05:55.431 01:07:08 event -- scripts/common.sh@345 -- # : 1 00:05:55.431 01:07:08 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.431 01:07:08 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.431 01:07:08 event -- scripts/common.sh@365 -- # decimal 1 00:05:55.431 01:07:08 event -- scripts/common.sh@353 -- # local d=1 00:05:55.431 01:07:08 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.431 01:07:08 event -- scripts/common.sh@355 -- # echo 1 00:05:55.431 01:07:08 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.431 01:07:08 event -- scripts/common.sh@366 -- # decimal 2 00:05:55.431 01:07:08 event -- scripts/common.sh@353 -- # local d=2 00:05:55.431 01:07:08 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.431 01:07:08 event -- scripts/common.sh@355 -- # echo 2 00:05:55.431 01:07:08 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.431 01:07:08 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.431 01:07:08 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.431 01:07:08 event -- scripts/common.sh@368 -- # return 0 00:05:55.431 01:07:08 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.431 01:07:08 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:55.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.431 --rc genhtml_branch_coverage=1 00:05:55.431 --rc genhtml_function_coverage=1 00:05:55.431 --rc genhtml_legend=1 00:05:55.431 --rc geninfo_all_blocks=1 00:05:55.431 --rc geninfo_unexecuted_blocks=1 00:05:55.431 00:05:55.431 ' 00:05:55.431 01:07:08 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:55.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.431 --rc genhtml_branch_coverage=1 00:05:55.431 --rc genhtml_function_coverage=1 00:05:55.431 --rc genhtml_legend=1 00:05:55.431 --rc geninfo_all_blocks=1 00:05:55.431 --rc geninfo_unexecuted_blocks=1 00:05:55.431 00:05:55.431 ' 00:05:55.431 01:07:08 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:55.431 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:55.431 --rc genhtml_branch_coverage=1 00:05:55.431 --rc genhtml_function_coverage=1 00:05:55.431 --rc genhtml_legend=1 00:05:55.431 --rc geninfo_all_blocks=1 00:05:55.431 --rc geninfo_unexecuted_blocks=1 00:05:55.431 00:05:55.431 ' 00:05:55.431 01:07:08 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:55.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.431 --rc genhtml_branch_coverage=1 00:05:55.431 --rc genhtml_function_coverage=1 00:05:55.431 --rc genhtml_legend=1 00:05:55.431 --rc geninfo_all_blocks=1 00:05:55.431 --rc geninfo_unexecuted_blocks=1 00:05:55.431 00:05:55.431 ' 00:05:55.431 01:07:08 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:55.431 01:07:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:55.431 01:07:08 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:55.432 01:07:08 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:55.432 01:07:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.432 01:07:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.432 ************************************ 00:05:55.432 START TEST event_perf 00:05:55.432 ************************************ 00:05:55.432 01:07:08 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:55.735 Running I/O for 1 seconds...[2024-10-15 01:07:08.164159] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:05:55.735 [2024-10-15 01:07:08.164332] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69872 ] 00:05:55.735 [2024-10-15 01:07:08.308462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.735 [2024-10-15 01:07:08.337896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.735 [2024-10-15 01:07:08.338098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.735 [2024-10-15 01:07:08.338324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.735 [2024-10-15 01:07:08.338144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.674 Running I/O for 1 seconds... 00:05:56.674 lcore 0: 207402 00:05:56.674 lcore 1: 207402 00:05:56.674 lcore 2: 207404 00:05:56.674 lcore 3: 207405 00:05:56.674 done. 
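The event_perf run above is driven by the same `run_test` wrapper that produces the `START TEST` / `END TEST` banners and timing lines throughout this log. The real helper lives in `common/autotest_common.sh` and also manages xtrace; the sketch below is a hypothetical minimal re-creation of just the banner and exit-status behavior seen in the output, not the actual implementation.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the run_test wrapper whose banners appear in this
# log. Assumed: only the banner format and status propagation, which are
# visible in the output above; the real helper does more (xtrace, timing).
run_test() {
  local name=$1
  shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"                # run the test command with its arguments
  local rc=$?         # capture its exit status before printing the footer
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test demo_echo echo "hello from the test body"
```

The wrapper returns the wrapped command's status, which is why a failing test still gets its `END TEST` banner before the pipeline aborts.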
00:05:56.934 00:05:56.934 real 0m1.280s 00:05:56.934 user 0m4.060s 00:05:56.934 sys 0m0.101s 00:05:56.934 01:07:09 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.934 01:07:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.934 ************************************ 00:05:56.934 END TEST event_perf 00:05:56.934 ************************************ 00:05:56.934 01:07:09 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:56.934 01:07:09 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:56.934 01:07:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.934 01:07:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.934 ************************************ 00:05:56.934 START TEST event_reactor 00:05:56.934 ************************************ 00:05:56.934 01:07:09 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:56.934 [2024-10-15 01:07:09.515409] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:05:56.934 [2024-10-15 01:07:09.515588] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69906 ] 00:05:57.200 [2024-10-15 01:07:09.658941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.200 [2024-10-15 01:07:09.685374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.146 test_start 00:05:58.146 oneshot 00:05:58.146 tick 100 00:05:58.146 tick 100 00:05:58.146 tick 250 00:05:58.146 tick 100 00:05:58.146 tick 100 00:05:58.146 tick 100 00:05:58.146 tick 250 00:05:58.146 tick 500 00:05:58.146 tick 100 00:05:58.146 tick 100 00:05:58.146 tick 250 00:05:58.146 tick 100 00:05:58.146 tick 100 00:05:58.146 test_end 00:05:58.146 00:05:58.146 real 0m1.271s 00:05:58.146 user 0m1.095s 00:05:58.146 sys 0m0.069s 00:05:58.146 01:07:10 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.146 01:07:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:58.146 ************************************ 00:05:58.146 END TEST event_reactor 00:05:58.146 ************************************ 00:05:58.146 01:07:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:58.146 01:07:10 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:58.146 01:07:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.146 01:07:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.146 ************************************ 00:05:58.146 START TEST event_reactor_perf 00:05:58.146 ************************************ 00:05:58.146 01:07:10 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:58.146 [2024-10-15 
01:07:10.853636] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:05:58.146 [2024-10-15 01:07:10.853810] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69937 ] 00:05:58.406 [2024-10-15 01:07:10.996696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.406 [2024-10-15 01:07:11.022454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.788 test_start 00:05:59.788 test_end 00:05:59.788 Performance: 405196 events per second 00:05:59.788 00:05:59.788 real 0m1.266s 00:05:59.788 user 0m1.089s 00:05:59.788 sys 0m0.070s 00:05:59.788 01:07:12 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.788 01:07:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.788 ************************************ 00:05:59.788 END TEST event_reactor_perf 00:05:59.788 ************************************ 00:05:59.788 01:07:12 event -- event/event.sh@49 -- # uname -s 00:05:59.788 01:07:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:59.788 01:07:12 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:59.788 01:07:12 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.788 01:07:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.788 01:07:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.788 ************************************ 00:05:59.788 START TEST event_scheduler 00:05:59.788 ************************************ 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:59.788 * Looking for test storage... 
00:05:59.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.788 01:07:12 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:59.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.788 --rc genhtml_branch_coverage=1 00:05:59.788 --rc genhtml_function_coverage=1 00:05:59.788 --rc genhtml_legend=1 00:05:59.788 --rc geninfo_all_blocks=1 00:05:59.788 --rc geninfo_unexecuted_blocks=1 00:05:59.788 00:05:59.788 ' 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:59.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.788 --rc genhtml_branch_coverage=1 00:05:59.788 --rc genhtml_function_coverage=1 00:05:59.788 --rc 
genhtml_legend=1 00:05:59.788 --rc geninfo_all_blocks=1 00:05:59.788 --rc geninfo_unexecuted_blocks=1 00:05:59.788 00:05:59.788 ' 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:59.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.788 --rc genhtml_branch_coverage=1 00:05:59.788 --rc genhtml_function_coverage=1 00:05:59.788 --rc genhtml_legend=1 00:05:59.788 --rc geninfo_all_blocks=1 00:05:59.788 --rc geninfo_unexecuted_blocks=1 00:05:59.788 00:05:59.788 ' 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:59.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.788 --rc genhtml_branch_coverage=1 00:05:59.788 --rc genhtml_function_coverage=1 00:05:59.788 --rc genhtml_legend=1 00:05:59.788 --rc geninfo_all_blocks=1 00:05:59.788 --rc geninfo_unexecuted_blocks=1 00:05:59.788 00:05:59.788 ' 00:05:59.788 01:07:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:59.788 01:07:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70013 00:05:59.788 01:07:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:59.788 01:07:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.788 01:07:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70013 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70013 ']' 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:59.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.788 01:07:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.788 [2024-10-15 01:07:12.457918] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:05:59.788 [2024-10-15 01:07:12.458128] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70013 ] 00:06:00.048 [2024-10-15 01:07:12.584877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.048 [2024-10-15 01:07:12.614449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.048 [2024-10-15 01:07:12.614643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.048 [2024-10-15 01:07:12.614721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.048 [2024-10-15 01:07:12.614889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.617 01:07:13 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.617 01:07:13 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:00.617 01:07:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:00.617 01:07:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.617 01:07:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:00.617 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.617 POWER: Cannot set governor of lcore 0 to userspace 00:06:00.617 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.617 POWER: Cannot set governor of lcore 0 to performance 00:06:00.617 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.617 POWER: Cannot set governor of lcore 0 to userspace 00:06:00.617 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:00.618 POWER: Unable to set Power Management Environment for lcore 0 00:06:00.618 [2024-10-15 01:07:13.287471] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:00.618 [2024-10-15 01:07:13.287497] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:00.618 [2024-10-15 01:07:13.287542] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:00.618 [2024-10-15 01:07:13.287566] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:00.618 [2024-10-15 01:07:13.287578] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:00.618 [2024-10-15 01:07:13.287591] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:00.618 01:07:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.618 01:07:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:00.618 01:07:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.618 01:07:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 [2024-10-15 01:07:13.357252] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
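The scheduler test that follows creates one pinned thread per core in the 0xF mask via `scheduler_thread_create` RPCs, where `-m` is a one-bit cpumask and `-a` the busy percentage (100 for `active_pinned`, 0 for `idle_pinned`). The sketch below only reconstructs how those per-core argument lists could be derived; it is an assumption-labeled illustration and does not talk to a running SPDK app (the real test pipes each line through `rpc_cmd --plugin scheduler_plugin`).

```shell
#!/usr/bin/env bash
# Hypothetical helper deriving the per-core cpumasks for the
# scheduler_thread_create RPCs visible in the log below. It only prints
# the argument lists; issuing them against a live app is not shown here.
build_thread_args() {
  local activity=$1   # -a value: 100 for active threads, 0 for idle
  local prefix=$2     # thread-name prefix: "active" or "idle"
  local core
  for core in 0 1 2 3; do
    # one-bit mask per core: core 0 -> 0x1, core 1 -> 0x2, ...
    printf '%s -n %s -m 0x%x -a %s\n' \
      scheduler_thread_create "${prefix}_pinned" $((1 << core)) "$activity"
  done
}

build_thread_args 100 active   # the four active_pinned threads
build_thread_args 0 idle       # the four idle_pinned threads
```

Each emitted line matches the shape of the RPC calls recorded below (`scheduler_thread_create -n active_pinned -m 0x1 -a 100`, and so on through mask `0x8`).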
00:06:00.878 01:07:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.878 01:07:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:00.878 01:07:13 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.878 01:07:13 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.878 01:07:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 ************************************ 00:06:00.878 START TEST scheduler_create_thread 00:06:00.878 ************************************ 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 2 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 3 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 4 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 5 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 6 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:00.878 7 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 8 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 9 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 10 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.878 01:07:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.259 01:07:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.259 01:07:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:02.259 01:07:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:02.259 01:07:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.259 01:07:14 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.638 ************************************ 00:06:03.638 END TEST scheduler_create_thread 00:06:03.638 ************************************ 00:06:03.638 01:07:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.638 00:06:03.638 real 0m2.610s 00:06:03.638 user 0m0.015s 00:06:03.638 sys 0m0.003s 00:06:03.638 01:07:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.638 01:07:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.638 01:07:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:03.638 01:07:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70013 00:06:03.638 01:07:16 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70013 ']' 00:06:03.638 01:07:16 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70013 00:06:03.638 01:07:16 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:03.638 01:07:16 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.638 01:07:16 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70013 00:06:03.638 killing process with pid 70013 00:06:03.638 01:07:16 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:03.638 01:07:16 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:03.638 01:07:16 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70013' 00:06:03.638 01:07:16 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70013 00:06:03.638 01:07:16 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70013 00:06:03.897 [2024-10-15 01:07:16.455194] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:04.158 ************************************ 00:06:04.158 END TEST event_scheduler 00:06:04.158 ************************************ 00:06:04.158 00:06:04.158 real 0m4.512s 00:06:04.158 user 0m8.190s 00:06:04.158 sys 0m0.432s 00:06:04.158 01:07:16 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.158 01:07:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.158 01:07:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:04.158 01:07:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:04.158 01:07:16 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.158 01:07:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.158 01:07:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.158 ************************************ 00:06:04.158 START TEST app_repeat 00:06:04.158 ************************************ 00:06:04.158 01:07:16 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:04.158 01:07:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.158 01:07:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.158 01:07:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:04.158 01:07:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.158 01:07:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:04.158 01:07:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:04.158 01:07:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:04.158 01:07:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70108 00:06:04.158 01:07:16 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:04.158 
01:07:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.158 01:07:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70108' 00:06:04.158 Process app_repeat pid: 70108 00:06:04.158 01:07:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.158 01:07:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:04.158 spdk_app_start Round 0 00:06:04.158 01:07:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70108 /var/tmp/spdk-nbd.sock 00:06:04.158 01:07:16 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70108 ']' 00:06:04.158 01:07:16 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.158 01:07:16 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.158 01:07:16 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.158 01:07:16 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.158 01:07:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.158 [2024-10-15 01:07:16.800904] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:06:04.158 [2024-10-15 01:07:16.801104] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70108 ] 00:06:04.418 [2024-10-15 01:07:16.945452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.418 [2024-10-15 01:07:16.972653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.418 [2024-10-15 01:07:16.972742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.988 01:07:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.988 01:07:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:04.988 01:07:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.248 Malloc0 00:06:05.248 01:07:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.507 Malloc1 00:06:05.507 01:07:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.507 01:07:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.507 01:07:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.507 01:07:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.507 01:07:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.507 01:07:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.507 01:07:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.507 01:07:18 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.507 01:07:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.507 01:07:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.507 01:07:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.507 01:07:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.507 01:07:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.507 01:07:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.507 01:07:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.507 01:07:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.507 /dev/nbd0 00:06:05.767 01:07:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.767 01:07:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.767 01:07:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:05.767 01:07:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:05.767 01:07:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:05.767 01:07:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:05.767 01:07:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:05.767 01:07:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:05.767 01:07:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:05.767 01:07:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:05.767 01:07:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.767 1+0 records in 00:06:05.767 1+0 
records out 00:06:05.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026823 s, 15.3 MB/s 00:06:05.767 01:07:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.767 01:07:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:05.767 01:07:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.767 01:07:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:05.767 01:07:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:05.767 01:07:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.767 01:07:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.767 01:07:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.767 /dev/nbd1 00:06:06.027 01:07:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.027 01:07:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.027 01:07:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:06.027 01:07:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:06.027 01:07:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:06.027 01:07:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:06.027 01:07:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:06.027 01:07:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:06.027 01:07:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:06.027 01:07:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:06.027 01:07:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.027 1+0 records in 00:06:06.027 1+0 records out 00:06:06.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187002 s, 21.9 MB/s 00:06:06.027 01:07:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.027 01:07:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:06.027 01:07:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.027 01:07:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:06.027 01:07:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:06.027 01:07:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.027 01:07:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.027 01:07:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.027 01:07:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.027 01:07:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.027 01:07:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.027 { 00:06:06.027 "nbd_device": "/dev/nbd0", 00:06:06.027 "bdev_name": "Malloc0" 00:06:06.027 }, 00:06:06.027 { 00:06:06.027 "nbd_device": "/dev/nbd1", 00:06:06.027 "bdev_name": "Malloc1" 00:06:06.027 } 00:06:06.027 ]' 00:06:06.027 01:07:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.027 { 00:06:06.027 "nbd_device": "/dev/nbd0", 00:06:06.027 "bdev_name": "Malloc0" 00:06:06.027 }, 00:06:06.027 { 00:06:06.027 "nbd_device": "/dev/nbd1", 00:06:06.027 "bdev_name": "Malloc1" 00:06:06.027 } 00:06:06.027 ]' 00:06:06.027 01:07:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.287 /dev/nbd1' 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.287 /dev/nbd1' 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.287 256+0 records in 00:06:06.287 256+0 records out 00:06:06.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137783 s, 76.1 MB/s 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.287 256+0 records in 00:06:06.287 256+0 records out 00:06:06.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017503 s, 59.9 MB/s 00:06:06.287 01:07:18 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.287 256+0 records in 00:06:06.287 256+0 records out 00:06:06.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182242 s, 57.5 MB/s 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.287 01:07:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.546 01:07:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.546 01:07:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.546 01:07:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.546 01:07:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.546 01:07:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.546 01:07:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.546 01:07:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.546 01:07:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.546 01:07:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.546 01:07:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.806 01:07:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.806 01:07:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.806 01:07:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.806 01:07:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.806 01:07:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.806 01:07:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.806 01:07:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:06.806 01:07:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.806 01:07:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.806 01:07:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.806 01:07:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.806 01:07:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.806 01:07:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.806 01:07:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.066 01:07:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.066 01:07:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.066 01:07:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.066 01:07:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:07.066 01:07:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.066 01:07:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.066 01:07:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.066 01:07:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.066 01:07:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.066 01:07:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.066 01:07:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.326 [2024-10-15 01:07:19.904058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.326 [2024-10-15 01:07:19.928605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.326 [2024-10-15 01:07:19.928605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.326 
[2024-10-15 01:07:19.970092] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.326 [2024-10-15 01:07:19.970155] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.620 spdk_app_start Round 1 00:06:10.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.620 01:07:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.620 01:07:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:10.620 01:07:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70108 /var/tmp/spdk-nbd.sock 00:06:10.620 01:07:22 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70108 ']' 00:06:10.620 01:07:22 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.620 01:07:22 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.620 01:07:22 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:10.620 01:07:22 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:10.620 01:07:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:10.620 01:07:22 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:10.620 01:07:22 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:06:10.620 01:07:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:10.620 Malloc0
00:06:10.620 01:07:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:10.880 Malloc1
00:06:10.880 01:07:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:10.880 01:07:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.880 01:07:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:10.880 01:07:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:10.880 01:07:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.880 01:07:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:10.880 01:07:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:10.880 01:07:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:10.880 01:07:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:10.880 01:07:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:10.880 01:07:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:10.880 01:07:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:10.880 01:07:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:10.880 01:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:10.880 01:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:10.880 01:07:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:11.140 /dev/nbd0
00:06:11.140 01:07:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:11.140 01:07:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:11.140 01:07:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:06:11.140 01:07:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:11.140 01:07:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:11.140 01:07:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:11.140 01:07:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:06:11.140 01:07:23 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:11.140 01:07:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:11.140 01:07:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:11.140 01:07:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:11.140 1+0 records in
00:06:11.140 1+0 records out
00:06:11.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293067 s, 14.0 MB/s
00:06:11.140 01:07:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:11.140 01:07:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:11.140 01:07:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:11.140 01:07:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:11.140 01:07:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:11.140 01:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:11.140 01:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:11.140 01:07:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:11.140 /dev/nbd1
00:06:11.400 01:07:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:11.400 01:07:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:11.400 01:07:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:06:11.400 01:07:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:11.400 01:07:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:11.400 01:07:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:11.400 01:07:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:06:11.400 01:07:23 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:11.400 01:07:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:11.400 01:07:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:11.400 01:07:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:11.400 1+0 records in
00:06:11.400 1+0 records out
00:06:11.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216643 s, 18.9 MB/s
00:06:11.400 01:07:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:11.400 01:07:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:11.400 01:07:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:11.400 01:07:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:11.400 01:07:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:11.400 01:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:11.400 01:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:11.400 01:07:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:11.400 01:07:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:11.400 01:07:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:11.400 01:07:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:11.400 {
00:06:11.400 "nbd_device": "/dev/nbd0",
00:06:11.400 "bdev_name": "Malloc0"
00:06:11.400 },
00:06:11.400 {
00:06:11.400 "nbd_device": "/dev/nbd1",
00:06:11.400 "bdev_name": "Malloc1"
00:06:11.400 }
00:06:11.400 ]'
00:06:11.400 01:07:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:11.400 {
00:06:11.400 "nbd_device": "/dev/nbd0",
00:06:11.400 "bdev_name": "Malloc0"
00:06:11.400 },
00:06:11.401 {
00:06:11.401 "nbd_device": "/dev/nbd1",
00:06:11.401 "bdev_name": "Malloc1"
00:06:11.401 }
00:06:11.401 ]'
00:06:11.401 01:07:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:11.661 /dev/nbd1'
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:11.661 /dev/nbd1'
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:11.661 256+0 records in
00:06:11.661 256+0 records out
00:06:11.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134583 s, 77.9 MB/s
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:11.661 256+0 records in
00:06:11.661 256+0 records out
00:06:11.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230187 s, 45.6 MB/s
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:11.661 256+0 records in
00:06:11.661 256+0 records out
00:06:11.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238179 s, 44.0 MB/s
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:11.661 01:07:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:11.926 01:07:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:11.926 01:07:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:11.926 01:07:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:11.926 01:07:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:11.926 01:07:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:11.926 01:07:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:11.926 01:07:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:11.926 01:07:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:11.926 01:07:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:11.926 01:07:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:12.206 01:07:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:12.206 01:07:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:12.206 01:07:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:12.206 01:07:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:12.206 01:07:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:12.206 01:07:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:12.206 01:07:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:12.206 01:07:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:12.206 01:07:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:12.206 01:07:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:12.206 01:07:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:12.206 01:07:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:12.206 01:07:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:12.206 01:07:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:12.484 01:07:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:12.484 01:07:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:12.484 01:07:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:12.484 01:07:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:12.484 01:07:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:12.484 01:07:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:12.484 01:07:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:12.484 01:07:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:12.484 01:07:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:12.484 01:07:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:12.484 01:07:25 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:12.744 [2024-10-15 01:07:25.291570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:12.744 [2024-10-15 01:07:25.315441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.744 [2024-10-15 01:07:25.315463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:12.744 [2024-10-15 01:07:25.357497] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:12.744 [2024-10-15 01:07:25.357558] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:16.040 spdk_app_start Round 2
00:06:16.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:16.040 01:07:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:16.040 01:07:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:16.040 01:07:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70108 /var/tmp/spdk-nbd.sock
00:06:16.040 01:07:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70108 ']'
00:06:16.040 01:07:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:16.040 01:07:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:16.040 01:07:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:16.040 01:07:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:16.040 01:07:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:16.040 01:07:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:16.040 01:07:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:06:16.040 01:07:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:16.040 Malloc0
00:06:16.040 01:07:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:16.300 Malloc1
00:06:16.300 01:07:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:16.300 01:07:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:16.300 01:07:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:16.300 01:07:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:16.300 01:07:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.300 01:07:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:16.300 01:07:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:16.300 01:07:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:16.300 01:07:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:16.300 01:07:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:16.300 01:07:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.300 01:07:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:16.300 01:07:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:16.300 01:07:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:16.300 01:07:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:16.300 01:07:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:16.300 /dev/nbd0
00:06:16.560 01:07:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:16.560 01:07:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:16.560 1+0 records in
00:06:16.560 1+0 records out
00:06:16.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328229 s, 12.5 MB/s
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:16.560 01:07:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:16.560 01:07:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:16.560 01:07:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:16.560 /dev/nbd1
00:06:16.560 01:07:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:16.560 01:07:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:16.560 01:07:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:16.560 1+0 records in
00:06:16.560 1+0 records out
00:06:16.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374223 s, 10.9 MB/s
00:06:16.820 01:07:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:16.820 01:07:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:16.820 01:07:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:06:16.820 01:07:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:16.820 01:07:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:16.820 {
00:06:16.820 "nbd_device": "/dev/nbd0",
00:06:16.820 "bdev_name": "Malloc0"
00:06:16.820 },
00:06:16.820 {
00:06:16.820 "nbd_device": "/dev/nbd1",
00:06:16.820 "bdev_name": "Malloc1"
00:06:16.820 }
00:06:16.820 ]'
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:16.820 {
00:06:16.820 "nbd_device": "/dev/nbd0",
00:06:16.820 "bdev_name": "Malloc0"
00:06:16.820 },
00:06:16.820 {
00:06:16.820 "nbd_device": "/dev/nbd1",
00:06:16.820 "bdev_name": "Malloc1"
00:06:16.820 }
00:06:16.820 ]'
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:16.820 /dev/nbd1'
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:16.820 /dev/nbd1'
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:16.820 01:07:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:17.080 256+0 records in
00:06:17.080 256+0 records out
00:06:17.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133241 s, 78.7 MB/s
00:06:17.080 01:07:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:17.080 01:07:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:17.080 256+0 records in
00:06:17.080 256+0 records out
00:06:17.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193403 s, 54.2 MB/s
00:06:17.080 01:07:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:17.081 256+0 records in
00:06:17.081 256+0 records out
00:06:17.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238825 s, 43.9 MB/s
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:17.081 01:07:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:17.341 01:07:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:17.341 01:07:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:17.341 01:07:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:17.341 01:07:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:17.341 01:07:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:17.341 01:07:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:17.341 01:07:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:17.341 01:07:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:17.341 01:07:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:17.341 01:07:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:17.341 01:07:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:17.341 01:07:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:17.341 01:07:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:17.341 01:07:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:17.341 01:07:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:17.341 01:07:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:17.341 01:07:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:17.341 01:07:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:17.341 01:07:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:17.341 01:07:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:17.341 01:07:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:17.600 01:07:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:17.600 01:07:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:17.600 01:07:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:17.600 01:07:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:17.600 01:07:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:17.600 01:07:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:17.600 01:07:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:17.600 01:07:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:17.600 01:07:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:17.600 01:07:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:17.600 01:07:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:17.600 01:07:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:17.600 01:07:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:17.860 01:07:30 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:18.120 [2024-10-15 01:07:30.650634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:18.120 [2024-10-15 01:07:30.674566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:18.120 [2024-10-15 01:07:30.674569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:18.120 [2024-10-15 01:07:30.716553] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:18.120 [2024-10-15 01:07:30.716629] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:21.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:21.415 01:07:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70108 /var/tmp/spdk-nbd.sock
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70108 ']'
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:06:21.415 01:07:33 event.app_repeat -- event/event.sh@39 -- # killprocess 70108
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70108 ']'
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70108
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@955 -- # uname
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70108
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70108'
killing process with pid 70108
01:07:33 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70108
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70108
00:06:21.415 spdk_app_start is called in Round 0.
00:06:21.415 Shutdown signal received, stop current app iteration
00:06:21.415 Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 reinitialization...
00:06:21.415 spdk_app_start is called in Round 1.
00:06:21.415 Shutdown signal received, stop current app iteration
00:06:21.415 Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 reinitialization...
00:06:21.415 spdk_app_start is called in Round 2.
00:06:21.415 Shutdown signal received, stop current app iteration
00:06:21.415 Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 reinitialization...
00:06:21.415 spdk_app_start is called in Round 3.
00:06:21.415 Shutdown signal received, stop current app iteration
00:06:21.415 01:07:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:21.415 01:07:33 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:21.415
00:06:21.415 real 0m17.193s
00:06:21.415 user 0m37.965s
00:06:21.415 sys 0m2.617s
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:21.415 01:07:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:21.415 ************************************
00:06:21.415 END TEST app_repeat
00:06:21.415 ************************************
00:06:21.415 01:07:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:21.415 01:07:33 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:06:21.415 01:07:33 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:21.415 01:07:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:21.415 01:07:33 event -- common/autotest_common.sh@10 -- # set +x
00:06:21.415 ************************************
00:06:21.415 START TEST cpu_locks
00:06:21.415 ************************************
00:06:21.415 01:07:34 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:06:21.415 * Looking for test storage...
00:06:21.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:21.415 01:07:34 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:21.415 01:07:34 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version
00:06:21.415 01:07:34 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:21.676 01:07:34 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:21.676 01:07:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:21.676 01:07:34 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:21.676 01:07:34 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:21.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:21.676 --rc genhtml_branch_coverage=1
00:06:21.676 --rc genhtml_function_coverage=1
00:06:21.676 --rc genhtml_legend=1
00:06:21.676 --rc geninfo_all_blocks=1
00:06:21.676 --rc geninfo_unexecuted_blocks=1
00:06:21.676
00:06:21.676 '
00:06:21.676 01:07:34 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:21.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:21.676 --rc genhtml_branch_coverage=1
00:06:21.676 --rc genhtml_function_coverage=1
00:06:21.676 --rc genhtml_legend=1
00:06:21.676 --rc geninfo_all_blocks=1
00:06:21.676 --rc geninfo_unexecuted_blocks=1
00:06:21.676 00:06:21.676 ' 00:06:21.676 01:07:34 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:21.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.676 --rc genhtml_branch_coverage=1 00:06:21.676 --rc genhtml_function_coverage=1 00:06:21.676 --rc genhtml_legend=1 00:06:21.676 --rc geninfo_all_blocks=1 00:06:21.676 --rc geninfo_unexecuted_blocks=1 00:06:21.676 00:06:21.676 ' 00:06:21.676 01:07:34 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:21.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.676 --rc genhtml_branch_coverage=1 00:06:21.676 --rc genhtml_function_coverage=1 00:06:21.676 --rc genhtml_legend=1 00:06:21.676 --rc geninfo_all_blocks=1 00:06:21.676 --rc geninfo_unexecuted_blocks=1 00:06:21.676 00:06:21.676 ' 00:06:21.676 01:07:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:21.676 01:07:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:21.676 01:07:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:21.676 01:07:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:21.676 01:07:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.676 01:07:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.676 01:07:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.676 ************************************ 00:06:21.676 START TEST default_locks 00:06:21.676 ************************************ 00:06:21.676 01:07:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:21.676 01:07:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70533 00:06:21.676 01:07:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.676 
01:07:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70533 00:06:21.676 01:07:34 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70533 ']' 00:06:21.676 01:07:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.676 01:07:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.676 01:07:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.676 01:07:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.676 01:07:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.676 [2024-10-15 01:07:34.327723] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:06:21.676 [2024-10-15 01:07:34.327854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70533 ] 00:06:21.936 [2024-10-15 01:07:34.471044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.936 [2024-10-15 01:07:34.496798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.506 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.506 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:22.506 01:07:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70533 00:06:22.506 01:07:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70533 00:06:22.506 01:07:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.766 01:07:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70533 00:06:22.766 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70533 ']' 00:06:22.766 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70533 00:06:22.766 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:22.766 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.766 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70533 00:06:22.766 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.766 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.766 killing process with pid 70533 00:06:22.766 01:07:35 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70533' 00:06:22.766 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70533 00:06:22.766 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70533 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70533 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70533 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70533 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70533 ']' 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
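The trace above repeatedly runs the `killprocess` sequence from autotest_common.sh: confirm the pid is alive with `kill -0`, check its command name with `ps`, refuse to touch a `sudo` wrapper, then kill and reap it. A minimal sketch of that pattern, with simplified logic (the real helper in autotest_common.sh retries and handles more cases):

```shell
# Sketch of the killprocess pattern seen in the trace (simplified).
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                  # process must exist
    local name
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 in the trace
    [ "$name" != sudo ] || return 1             # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap it if it is our child
}

sleep 30 &
killprocess $! && echo "killed"
```

The `wait` at the end is what lets the test proceed only after the target has actually exited, which is why the trace shows `kill <pid>` immediately followed by `wait <pid>`.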
00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.033 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70533) - No such process 00:06:23.033 ERROR: process (pid: 70533) is no longer running 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:23.033 00:06:23.033 real 0m1.485s 00:06:23.033 user 0m1.419s 00:06:23.033 sys 0m0.508s 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.033 01:07:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.033 ************************************ 00:06:23.033 END TEST default_locks 00:06:23.033 ************************************ 00:06:23.311 01:07:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:23.311 01:07:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:23.311 01:07:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.311 01:07:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.311 ************************************ 00:06:23.311 START TEST default_locks_via_rpc 00:06:23.311 ************************************ 00:06:23.311 01:07:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:23.311 01:07:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70581 00:06:23.311 01:07:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.311 01:07:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70581 00:06:23.311 01:07:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70581 ']' 00:06:23.311 01:07:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.311 01:07:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.311 01:07:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.311 01:07:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.311 01:07:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.311 [2024-10-15 01:07:35.885399] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:06:23.311 [2024-10-15 01:07:35.885546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70581 ] 00:06:23.588 [2024-10-15 01:07:36.029150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.588 [2024-10-15 01:07:36.055646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.158 01:07:36 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70581 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70581 00:06:24.158 01:07:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.416 01:07:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70581 00:06:24.416 01:07:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70581 ']' 00:06:24.417 01:07:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70581 00:06:24.417 01:07:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:24.417 01:07:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.417 01:07:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70581 00:06:24.417 01:07:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.417 01:07:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.417 killing process with pid 70581 00:06:24.417 01:07:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70581' 00:06:24.417 01:07:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70581 00:06:24.417 01:07:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70581 00:06:24.986 00:06:24.986 real 0m1.696s 00:06:24.986 user 0m1.688s 00:06:24.986 sys 0m0.558s 00:06:24.986 01:07:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.986 01:07:37 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.986 ************************************ 00:06:24.986 END TEST default_locks_via_rpc 00:06:24.986 ************************************ 00:06:24.986 01:07:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:24.986 01:07:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.986 01:07:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.986 01:07:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.986 ************************************ 00:06:24.986 START TEST non_locking_app_on_locked_coremask 00:06:24.986 ************************************ 00:06:24.986 01:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:24.986 01:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70627 00:06:24.986 01:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.986 01:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70627 /var/tmp/spdk.sock 00:06:24.986 01:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70627 ']' 00:06:24.986 01:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.986 01:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
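The default_locks test earlier in the trace uses the `NOT` helper to run a negative check: `NOT waitforlisten 70533` must itself succeed precisely because `waitforlisten` fails on a dead pid (the trace then records `es=1`). A minimal sketch of that exit-status inversion, with simplified names (the real helper in autotest_common.sh also validates the argument via `valid_exec_arg`):

```shell
# Sketch of the NOT helper pattern from the trace: run a command that is
# expected to fail, and succeed only if it did fail.
NOT() {
    local es=0
    "$@" || es=$?          # run the command, capture its exit status
    [ "$es" -ne 0 ]        # invert: nonzero status means the test passes
}

NOT false && echo "negative test passed"
```

Capturing the status with `|| es=$?` instead of testing `$?` directly keeps the helper safe under `set -e`, since the command's failure is consumed inside a tested list.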
00:06:24.986 01:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.986 01:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.986 01:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.986 [2024-10-15 01:07:37.647280] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:06:24.986 [2024-10-15 01:07:37.647425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70627 ] 00:06:25.246 [2024-10-15 01:07:37.787398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.246 [2024-10-15 01:07:37.813466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.816 01:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.816 01:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:25.816 01:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:25.816 01:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70643 00:06:25.816 01:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70643 /var/tmp/spdk2.sock 00:06:25.816 01:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70643 ']' 00:06:25.816 01:07:38 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.816 01:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.816 01:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.816 01:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.816 01:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.816 [2024-10-15 01:07:38.530220] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:06:25.816 [2024-10-15 01:07:38.530359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70643 ] 00:06:26.076 [2024-10-15 01:07:38.667096] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:26.076 [2024-10-15 01:07:38.667156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.076 [2024-10-15 01:07:38.722788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.646 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.646 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:26.646 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70627 00:06:26.646 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70627 00:06:26.646 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.215 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70627 00:06:27.215 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70627 ']' 00:06:27.215 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70627 00:06:27.215 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:27.215 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.215 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70627 00:06:27.215 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.215 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.215 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
70627' 00:06:27.215 killing process with pid 70627 00:06:27.215 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70627 00:06:27.215 01:07:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70627 00:06:27.785 01:07:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70643 00:06:27.785 01:07:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70643 ']' 00:06:27.785 01:07:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70643 00:06:27.785 01:07:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:27.785 01:07:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.785 01:07:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70643 00:06:28.045 killing process with pid 70643 00:06:28.045 01:07:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.045 01:07:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.045 01:07:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70643' 00:06:28.045 01:07:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70643 00:06:28.045 01:07:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70643 00:06:28.305 ************************************ 00:06:28.305 END TEST non_locking_app_on_locked_coremask 00:06:28.305 ************************************ 00:06:28.305 00:06:28.305 real 0m3.315s 00:06:28.305 user 0m3.471s 
00:06:28.305 sys 0m0.985s 00:06:28.305 01:07:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.305 01:07:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.305 01:07:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:28.305 01:07:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.305 01:07:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.305 01:07:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.305 ************************************ 00:06:28.305 START TEST locking_app_on_unlocked_coremask 00:06:28.305 ************************************ 00:06:28.305 01:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:28.305 01:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:28.305 01:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70710 00:06:28.305 01:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70710 /var/tmp/spdk.sock 00:06:28.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
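The cpu_locks preamble near the top of this section runs `lt 1.15 2` through `cmp_versions` in scripts/common.sh: both dotted versions are split into arrays on `.` and `-`, then compared field by field. A minimal sketch of that comparison under simplified assumptions (the real `cmp_versions` handles `<`, `>`, and equality operators; this sketch implements only the less-than case):

```shell
# Sketch of the dotted-version comparison behind "lt 1.15 2" in the trace.
lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"       # split on dots and dashes
    IFS=.- read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    if (( ${#ver2[@]} > len )); then len=${#ver2[@]}; fi
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1                            # equal is not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

Comparing numerically field by field is why `1.15 < 2` holds here even though a plain string comparison would order them the other way.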
00:06:28.305 01:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70710 ']' 00:06:28.305 01:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.305 01:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.305 01:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.305 01:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.305 01:07:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.305 [2024-10-15 01:07:41.010004] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:06:28.305 [2024-10-15 01:07:41.010133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70710 ] 00:06:28.565 [2024-10-15 01:07:41.139419] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.565 [2024-10-15 01:07:41.139487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.565 [2024-10-15 01:07:41.164890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.135 01:07:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.135 01:07:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:29.135 01:07:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.135 01:07:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70726 00:06:29.135 01:07:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70726 /var/tmp/spdk2.sock 00:06:29.135 01:07:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70726 ']' 00:06:29.135 01:07:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.135 01:07:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.135 01:07:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.135 01:07:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.135 01:07:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.395 [2024-10-15 01:07:41.909260] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:06:29.395 [2024-10-15 01:07:41.909513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70726 ] 00:06:29.395 [2024-10-15 01:07:42.043562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.395 [2024-10-15 01:07:42.099432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.334 01:07:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.334 01:07:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:30.334 01:07:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70726 00:06:30.334 01:07:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70726 00:06:30.334 01:07:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.594 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70710 00:06:30.594 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70710 ']' 00:06:30.594 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70710 00:06:30.594 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:30.594 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.594 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70710 00:06:30.594 killing process with pid 70710 00:06:30.594 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.594 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.594 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70710' 00:06:30.594 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70710 00:06:30.594 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70710 00:06:31.533 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70726 00:06:31.533 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70726 ']' 00:06:31.533 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70726 00:06:31.533 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:31.533 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.533 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70726 00:06:31.533 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.533 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.533 killing process with pid 70726 00:06:31.533 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70726' 00:06:31.533 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70726 00:06:31.533 01:07:43 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@974 -- # wait 70726 00:06:31.794 00:06:31.794 real 0m3.414s 00:06:31.794 user 0m3.608s 00:06:31.794 sys 0m0.988s 00:06:31.794 01:07:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.794 ************************************ 00:06:31.794 END TEST locking_app_on_unlocked_coremask 00:06:31.794 ************************************ 00:06:31.794 01:07:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.794 01:07:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:31.794 01:07:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.794 01:07:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.794 01:07:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.794 ************************************ 00:06:31.794 START TEST locking_app_on_locked_coremask 00:06:31.794 ************************************ 00:06:31.794 01:07:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:31.794 01:07:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70784 00:06:31.794 01:07:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.794 01:07:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70784 /var/tmp/spdk.sock 00:06:31.794 01:07:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70784 ']' 00:06:31.794 01:07:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.794 01:07:44 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.794 01:07:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.794 01:07:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.794 01:07:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.794 [2024-10-15 01:07:44.504289] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:06:31.794 [2024-10-15 01:07:44.504424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70784 ] 00:06:32.054 [2024-10-15 01:07:44.633399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.054 [2024-10-15 01:07:44.660494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70800 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70800 /var/tmp/spdk2.sock 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70800 
/var/tmp/spdk2.sock 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 70800 /var/tmp/spdk2.sock 00:06:32.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70800 ']' 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.623 01:07:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.883 [2024-10-15 01:07:45.402044] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:06:32.884 [2024-10-15 01:07:45.402191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70800 ] 00:06:32.884 [2024-10-15 01:07:45.537284] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70784 has claimed it. 00:06:32.884 [2024-10-15 01:07:45.537347] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:33.453 ERROR: process (pid: 70800) is no longer running 00:06:33.453 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70800) - No such process 00:06:33.453 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.453 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:33.453 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:33.453 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:33.453 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:33.453 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:33.453 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70784 00:06:33.453 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70784 00:06:33.453 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.713 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70784 00:06:33.713 01:07:46 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70784 ']' 00:06:33.713 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70784 00:06:33.713 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:33.713 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.713 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70784 00:06:33.713 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.713 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.713 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70784' 00:06:33.713 killing process with pid 70784 00:06:33.713 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70784 00:06:33.713 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70784 00:06:34.287 00:06:34.287 real 0m2.285s 00:06:34.287 user 0m2.470s 00:06:34.287 sys 0m0.645s 00:06:34.287 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.287 01:07:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.287 ************************************ 00:06:34.287 END TEST locking_app_on_locked_coremask 00:06:34.287 ************************************ 00:06:34.287 01:07:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:34.287 01:07:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:06:34.287 01:07:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.287 01:07:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.287 ************************************ 00:06:34.287 START TEST locking_overlapped_coremask 00:06:34.287 ************************************ 00:06:34.287 01:07:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:34.287 01:07:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70842 00:06:34.287 01:07:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:34.287 01:07:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 70842 /var/tmp/spdk.sock 00:06:34.287 01:07:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 70842 ']' 00:06:34.288 01:07:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.288 01:07:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.288 01:07:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.288 01:07:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.288 01:07:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.288 [2024-10-15 01:07:46.853046] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:06:34.288 [2024-10-15 01:07:46.853169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70842 ] 00:06:34.288 [2024-10-15 01:07:46.999703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.555 [2024-10-15 01:07:47.030052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.555 [2024-10-15 01:07:47.030155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.555 [2024-10-15 01:07:47.030290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70860 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70860 /var/tmp/spdk2.sock 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70860 /var/tmp/spdk2.sock 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:35.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 70860 /var/tmp/spdk2.sock 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 70860 ']' 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.125 01:07:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.125 [2024-10-15 01:07:47.737734] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:06:35.125 [2024-10-15 01:07:47.737855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70860 ] 00:06:35.385 [2024-10-15 01:07:47.872104] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70842 has claimed it. 00:06:35.385 [2024-10-15 01:07:47.872168] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:35.953 ERROR: process (pid: 70860) is no longer running 00:06:35.953 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70860) - No such process 00:06:35.953 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.953 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:35.953 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:35.953 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.953 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.953 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.953 01:07:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:35.953 01:07:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:35.953 01:07:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:35.953 01:07:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:35.953 01:07:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 70842 00:06:35.953 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 70842 ']' 00:06:35.953 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 70842 00:06:35.953 01:07:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:35.953 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.953 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70842 00:06:35.954 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.954 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.954 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70842' 00:06:35.954 killing process with pid 70842 00:06:35.954 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 70842 00:06:35.954 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 70842 00:06:36.213 00:06:36.213 real 0m2.038s 00:06:36.213 user 0m5.513s 00:06:36.213 sys 0m0.491s 00:06:36.213 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.213 01:07:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.213 ************************************ 00:06:36.213 END TEST locking_overlapped_coremask 00:06:36.213 ************************************ 00:06:36.214 01:07:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:36.214 01:07:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.214 01:07:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.214 01:07:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.214 ************************************ 00:06:36.214 START TEST 
locking_overlapped_coremask_via_rpc 00:06:36.214 ************************************ 00:06:36.214 01:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:36.214 01:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70902 00:06:36.214 01:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:36.214 01:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 70902 /var/tmp/spdk.sock 00:06:36.214 01:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70902 ']' 00:06:36.214 01:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.214 01:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.214 01:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.214 01:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.214 01:07:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.473 [2024-10-15 01:07:48.956800] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:06:36.473 [2024-10-15 01:07:48.957026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70902 ] 00:06:36.474 [2024-10-15 01:07:49.105019] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:36.474 [2024-10-15 01:07:49.105199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.474 [2024-10-15 01:07:49.134834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.474 [2024-10-15 01:07:49.134919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.474 [2024-10-15 01:07:49.135080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.413 01:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.413 01:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:37.413 01:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:37.413 01:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70920 00:06:37.413 01:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 70920 /var/tmp/spdk2.sock 00:06:37.413 01:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70920 ']' 00:06:37.413 01:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.413 01:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.413 01:07:49 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.413 01:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.413 01:07:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.413 [2024-10-15 01:07:49.834471] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:06:37.413 [2024-10-15 01:07:49.834681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70920 ] 00:06:37.413 [2024-10-15 01:07:49.969907] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:37.413 [2024-10-15 01:07:49.969967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.413 [2024-10-15 01:07:50.033601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.413 [2024-10-15 01:07:50.037273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.413 [2024-10-15 01:07:50.037367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:37.983 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.983 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:37.983 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:37.983 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.983 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.983 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.983 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:37.983 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:37.983 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:37.983 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:37.983 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.983 01:07:50 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:37.983 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.983 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:37.983 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.983 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.983 [2024-10-15 01:07:50.697403] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70902 has claimed it. 00:06:38.244 request: 00:06:38.244 { 00:06:38.244 "method": "framework_enable_cpumask_locks", 00:06:38.244 "req_id": 1 00:06:38.244 } 00:06:38.244 Got JSON-RPC error response 00:06:38.244 response: 00:06:38.244 { 00:06:38.244 "code": -32603, 00:06:38.244 "message": "Failed to claim CPU core: 2" 00:06:38.244 } 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 70902 /var/tmp/spdk.sock 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 70902 ']' 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 70920 /var/tmp/spdk2.sock 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70920 ']' 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.244 01:07:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.504 01:07:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.504 01:07:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:38.504 01:07:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:38.504 01:07:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:38.504 01:07:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:38.504 01:07:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:38.504 00:06:38.504 real 0m2.277s 00:06:38.504 user 0m1.024s 00:06:38.504 sys 0m0.180s 00:06:38.504 01:07:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.504 01:07:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.504 ************************************ 00:06:38.504 END TEST locking_overlapped_coremask_via_rpc 00:06:38.504 ************************************ 00:06:38.504 01:07:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:38.504 01:07:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70902 ]] 00:06:38.504 01:07:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 70902 00:06:38.504 01:07:51 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70902 ']' 00:06:38.504 01:07:51 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70902 00:06:38.504 01:07:51 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:38.504 01:07:51 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.504 01:07:51 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70902 00:06:38.764 01:07:51 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.764 killing process with pid 70902 00:06:38.764 01:07:51 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.764 01:07:51 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70902' 00:06:38.764 01:07:51 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 70902 00:06:38.764 01:07:51 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 70902 00:06:39.335 01:07:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70920 ]] 00:06:39.335 01:07:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70920 00:06:39.335 01:07:51 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70920 ']' 00:06:39.335 01:07:51 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70920 00:06:39.335 01:07:51 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:39.335 01:07:51 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.335 01:07:51 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70920 00:06:39.335 killing process with pid 70920 00:06:39.335 01:07:51 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:39.335 01:07:51 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:39.335 01:07:51 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 70920' 00:06:39.335 01:07:51 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 70920 00:06:39.335 01:07:51 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 70920 00:06:39.595 01:07:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.595 01:07:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:39.595 01:07:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70902 ]] 00:06:39.595 01:07:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 70902 00:06:39.595 Process with pid 70902 is not found 00:06:39.595 Process with pid 70920 is not found 00:06:39.595 01:07:52 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70902 ']' 00:06:39.595 01:07:52 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70902 00:06:39.595 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (70902) - No such process 00:06:39.595 01:07:52 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 70902 is not found' 00:06:39.595 01:07:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70920 ]] 00:06:39.595 01:07:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70920 00:06:39.595 01:07:52 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70920 ']' 00:06:39.595 01:07:52 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70920 00:06:39.595 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (70920) - No such process 00:06:39.595 01:07:52 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 70920 is not found' 00:06:39.595 01:07:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.595 00:06:39.595 real 0m18.265s 00:06:39.595 user 0m31.752s 00:06:39.595 sys 0m5.425s 00:06:39.595 01:07:52 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.595 01:07:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.595 
************************************ 00:06:39.595 END TEST cpu_locks 00:06:39.595 ************************************ 00:06:39.855 00:06:39.855 real 0m44.436s 00:06:39.855 user 1m24.401s 00:06:39.855 sys 0m9.112s 00:06:39.855 01:07:52 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.855 01:07:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.855 ************************************ 00:06:39.855 END TEST event 00:06:39.855 ************************************ 00:06:39.855 01:07:52 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:39.855 01:07:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.855 01:07:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.855 01:07:52 -- common/autotest_common.sh@10 -- # set +x 00:06:39.855 ************************************ 00:06:39.855 START TEST thread 00:06:39.855 ************************************ 00:06:39.855 01:07:52 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:39.855 * Looking for test storage... 
00:06:39.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:39.855 01:07:52 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:39.855 01:07:52 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:39.855 01:07:52 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:40.115 01:07:52 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:40.115 01:07:52 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.115 01:07:52 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.115 01:07:52 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.115 01:07:52 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.115 01:07:52 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.115 01:07:52 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.115 01:07:52 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.115 01:07:52 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.115 01:07:52 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.115 01:07:52 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.115 01:07:52 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.115 01:07:52 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:40.115 01:07:52 thread -- scripts/common.sh@345 -- # : 1 00:06:40.115 01:07:52 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.115 01:07:52 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.115 01:07:52 thread -- scripts/common.sh@365 -- # decimal 1 00:06:40.115 01:07:52 thread -- scripts/common.sh@353 -- # local d=1 00:06:40.115 01:07:52 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.115 01:07:52 thread -- scripts/common.sh@355 -- # echo 1 00:06:40.115 01:07:52 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.115 01:07:52 thread -- scripts/common.sh@366 -- # decimal 2 00:06:40.115 01:07:52 thread -- scripts/common.sh@353 -- # local d=2 00:06:40.115 01:07:52 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.115 01:07:52 thread -- scripts/common.sh@355 -- # echo 2 00:06:40.115 01:07:52 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.115 01:07:52 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.115 01:07:52 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.115 01:07:52 thread -- scripts/common.sh@368 -- # return 0 00:06:40.115 01:07:52 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.115 01:07:52 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:40.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.115 --rc genhtml_branch_coverage=1 00:06:40.115 --rc genhtml_function_coverage=1 00:06:40.115 --rc genhtml_legend=1 00:06:40.115 --rc geninfo_all_blocks=1 00:06:40.115 --rc geninfo_unexecuted_blocks=1 00:06:40.115 00:06:40.115 ' 00:06:40.115 01:07:52 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:40.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.115 --rc genhtml_branch_coverage=1 00:06:40.115 --rc genhtml_function_coverage=1 00:06:40.115 --rc genhtml_legend=1 00:06:40.115 --rc geninfo_all_blocks=1 00:06:40.115 --rc geninfo_unexecuted_blocks=1 00:06:40.115 00:06:40.115 ' 00:06:40.115 01:07:52 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:40.115 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.115 --rc genhtml_branch_coverage=1 00:06:40.115 --rc genhtml_function_coverage=1 00:06:40.115 --rc genhtml_legend=1 00:06:40.115 --rc geninfo_all_blocks=1 00:06:40.115 --rc geninfo_unexecuted_blocks=1 00:06:40.115 00:06:40.115 ' 00:06:40.115 01:07:52 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:40.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.115 --rc genhtml_branch_coverage=1 00:06:40.115 --rc genhtml_function_coverage=1 00:06:40.115 --rc genhtml_legend=1 00:06:40.115 --rc geninfo_all_blocks=1 00:06:40.115 --rc geninfo_unexecuted_blocks=1 00:06:40.115 00:06:40.115 ' 00:06:40.115 01:07:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:40.115 01:07:52 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:40.115 01:07:52 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.115 01:07:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.115 ************************************ 00:06:40.115 START TEST thread_poller_perf 00:06:40.115 ************************************ 00:06:40.115 01:07:52 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:40.115 [2024-10-15 01:07:52.665973] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:06:40.115 [2024-10-15 01:07:52.666090] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71058 ] 00:06:40.115 [2024-10-15 01:07:52.811016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.375 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:40.375 [2024-10-15 01:07:52.853612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.313 [2024-10-15T01:07:54.037Z] ====================================== 00:06:41.313 [2024-10-15T01:07:54.037Z] busy:2302608676 (cyc) 00:06:41.313 [2024-10-15T01:07:54.037Z] total_run_count: 391000 00:06:41.313 [2024-10-15T01:07:54.037Z] tsc_hz: 2290000000 (cyc) 00:06:41.313 [2024-10-15T01:07:54.037Z] ====================================== 00:06:41.313 [2024-10-15T01:07:54.037Z] poller_cost: 5889 (cyc), 2571 (nsec) 00:06:41.313 ************************************ 00:06:41.313 END TEST thread_poller_perf 00:06:41.313 ************************************ 00:06:41.313 00:06:41.313 real 0m1.318s 00:06:41.313 user 0m1.129s 00:06:41.313 sys 0m0.083s 00:06:41.313 01:07:53 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.313 01:07:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.313 01:07:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.313 01:07:53 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:41.313 01:07:53 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.313 01:07:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.313 ************************************ 00:06:41.313 START TEST thread_poller_perf 00:06:41.313 
************************************ 00:06:41.313 01:07:54 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.573 [2024-10-15 01:07:54.050695] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:06:41.573 [2024-10-15 01:07:54.050875] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71089 ] 00:06:41.573 [2024-10-15 01:07:54.196054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.573 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:41.573 [2024-10-15 01:07:54.241252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.953 [2024-10-15T01:07:55.677Z] ====================================== 00:06:42.953 [2024-10-15T01:07:55.677Z] busy:2293787938 (cyc) 00:06:42.953 [2024-10-15T01:07:55.677Z] total_run_count: 5291000 00:06:42.953 [2024-10-15T01:07:55.677Z] tsc_hz: 2290000000 (cyc) 00:06:42.953 [2024-10-15T01:07:55.677Z] ====================================== 00:06:42.953 [2024-10-15T01:07:55.677Z] poller_cost: 433 (cyc), 189 (nsec) 00:06:42.953 00:06:42.953 real 0m1.314s 00:06:42.953 user 0m1.123s 00:06:42.953 sys 0m0.085s 00:06:42.953 01:07:55 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.953 ************************************ 00:06:42.953 END TEST thread_poller_perf 00:06:42.953 ************************************ 00:06:42.954 01:07:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.954 01:07:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:42.954 ************************************ 00:06:42.954 END TEST thread 00:06:42.954 ************************************ 00:06:42.954 
00:06:42.954 real 0m2.991s 00:06:42.954 user 0m2.406s 00:06:42.954 sys 0m0.388s 00:06:42.954 01:07:55 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.954 01:07:55 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.954 01:07:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:42.954 01:07:55 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:42.954 01:07:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.954 01:07:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.954 01:07:55 -- common/autotest_common.sh@10 -- # set +x 00:06:42.954 ************************************ 00:06:42.954 START TEST app_cmdline 00:06:42.954 ************************************ 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:42.954 * Looking for test storage... 00:06:42.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.954 01:07:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:42.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.954 --rc genhtml_branch_coverage=1 00:06:42.954 --rc genhtml_function_coverage=1 00:06:42.954 --rc 
genhtml_legend=1 00:06:42.954 --rc geninfo_all_blocks=1 00:06:42.954 --rc geninfo_unexecuted_blocks=1 00:06:42.954 00:06:42.954 ' 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:42.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.954 --rc genhtml_branch_coverage=1 00:06:42.954 --rc genhtml_function_coverage=1 00:06:42.954 --rc genhtml_legend=1 00:06:42.954 --rc geninfo_all_blocks=1 00:06:42.954 --rc geninfo_unexecuted_blocks=1 00:06:42.954 00:06:42.954 ' 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:42.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.954 --rc genhtml_branch_coverage=1 00:06:42.954 --rc genhtml_function_coverage=1 00:06:42.954 --rc genhtml_legend=1 00:06:42.954 --rc geninfo_all_blocks=1 00:06:42.954 --rc geninfo_unexecuted_blocks=1 00:06:42.954 00:06:42.954 ' 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:42.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.954 --rc genhtml_branch_coverage=1 00:06:42.954 --rc genhtml_function_coverage=1 00:06:42.954 --rc genhtml_legend=1 00:06:42.954 --rc geninfo_all_blocks=1 00:06:42.954 --rc geninfo_unexecuted_blocks=1 00:06:42.954 00:06:42.954 ' 00:06:42.954 01:07:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:42.954 01:07:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71178 00:06:42.954 01:07:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:42.954 01:07:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71178 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71178 ']' 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.954 01:07:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:43.214 [2024-10-15 01:07:55.753116] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:06:43.214 [2024-10-15 01:07:55.753380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71178 ] 00:06:43.214 [2024-10-15 01:07:55.897914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.473 [2024-10-15 01:07:55.937978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.042 01:07:56 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.042 01:07:56 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:44.042 01:07:56 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:44.042 { 00:06:44.042 "version": "SPDK v25.01-pre git sha1 3a02df0b1", 00:06:44.042 "fields": { 00:06:44.042 "major": 25, 00:06:44.042 "minor": 1, 00:06:44.042 "patch": 0, 00:06:44.042 "suffix": "-pre", 00:06:44.042 "commit": "3a02df0b1" 00:06:44.042 } 00:06:44.042 } 00:06:44.042 01:07:56 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:44.042 01:07:56 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:44.042 01:07:56 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:44.042 01:07:56 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:44.042 01:07:56 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:44.042 01:07:56 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.042 01:07:56 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:44.043 01:07:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:44.043 01:07:56 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:44.303 01:07:56 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.303 01:07:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:44.303 01:07:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:44.303 01:07:56 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.303 01:07:56 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:44.303 01:07:56 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.303 01:07:56 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.303 01:07:56 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.303 01:07:56 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.303 01:07:56 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.303 01:07:56 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.303 01:07:56 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.303 01:07:56 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:44.303 01:07:56 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:44.303 01:07:56 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.303 request: 00:06:44.303 { 00:06:44.303 "method": "env_dpdk_get_mem_stats", 00:06:44.303 "req_id": 1 00:06:44.303 } 00:06:44.303 Got JSON-RPC error response 00:06:44.303 response: 00:06:44.303 { 00:06:44.303 "code": -32601, 00:06:44.303 "message": "Method not found" 00:06:44.303 } 00:06:44.303 01:07:57 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:44.303 01:07:57 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.303 01:07:57 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.303 01:07:57 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.303 01:07:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71178 00:06:44.303 01:07:57 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71178 ']' 00:06:44.303 01:07:57 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71178 00:06:44.303 01:07:57 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:44.303 01:07:57 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.563 01:07:57 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71178 00:06:44.563 killing process with pid 71178 00:06:44.563 01:07:57 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.563 01:07:57 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.563 01:07:57 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71178' 00:06:44.563 01:07:57 app_cmdline -- common/autotest_common.sh@969 -- # kill 71178 00:06:44.563 01:07:57 app_cmdline -- common/autotest_common.sh@974 -- # wait 71178 00:06:45.132 00:06:45.132 real 0m2.222s 00:06:45.132 user 0m2.319s 00:06:45.132 sys 0m0.672s 00:06:45.132 01:07:57 app_cmdline -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.132 01:07:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.132 ************************************ 00:06:45.132 END TEST app_cmdline 00:06:45.132 ************************************ 00:06:45.132 01:07:57 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:45.132 01:07:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.132 01:07:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.132 01:07:57 -- common/autotest_common.sh@10 -- # set +x 00:06:45.132 ************************************ 00:06:45.132 START TEST version 00:06:45.132 ************************************ 00:06:45.132 01:07:57 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:45.132 * Looking for test storage... 00:06:45.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:45.399 01:07:57 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:45.399 01:07:57 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:45.399 01:07:57 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:45.399 01:07:57 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:45.399 01:07:57 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.399 01:07:57 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.399 01:07:57 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.399 01:07:57 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.399 01:07:57 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.399 01:07:57 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.399 01:07:57 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.399 01:07:57 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.399 01:07:57 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.399 01:07:57 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:45.399 01:07:57 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.399 01:07:57 version -- scripts/common.sh@344 -- # case "$op" in 00:06:45.399 01:07:57 version -- scripts/common.sh@345 -- # : 1 00:06:45.399 01:07:57 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.399 01:07:57 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.399 01:07:57 version -- scripts/common.sh@365 -- # decimal 1 00:06:45.399 01:07:57 version -- scripts/common.sh@353 -- # local d=1 00:06:45.399 01:07:57 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.399 01:07:57 version -- scripts/common.sh@355 -- # echo 1 00:06:45.399 01:07:57 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.399 01:07:57 version -- scripts/common.sh@366 -- # decimal 2 00:06:45.399 01:07:57 version -- scripts/common.sh@353 -- # local d=2 00:06:45.399 01:07:57 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.399 01:07:57 version -- scripts/common.sh@355 -- # echo 2 00:06:45.399 01:07:57 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.399 01:07:57 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.399 01:07:57 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.399 01:07:57 version -- scripts/common.sh@368 -- # return 0 00:06:45.400 01:07:57 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.400 01:07:57 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:45.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.400 --rc genhtml_branch_coverage=1 00:06:45.400 --rc genhtml_function_coverage=1 00:06:45.400 --rc genhtml_legend=1 00:06:45.400 --rc geninfo_all_blocks=1 00:06:45.400 --rc geninfo_unexecuted_blocks=1 00:06:45.400 00:06:45.400 ' 00:06:45.400 01:07:57 version -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:06:45.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.400 --rc genhtml_branch_coverage=1 00:06:45.400 --rc genhtml_function_coverage=1 00:06:45.400 --rc genhtml_legend=1 00:06:45.400 --rc geninfo_all_blocks=1 00:06:45.400 --rc geninfo_unexecuted_blocks=1 00:06:45.400 00:06:45.400 ' 00:06:45.400 01:07:57 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:45.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.400 --rc genhtml_branch_coverage=1 00:06:45.400 --rc genhtml_function_coverage=1 00:06:45.400 --rc genhtml_legend=1 00:06:45.400 --rc geninfo_all_blocks=1 00:06:45.400 --rc geninfo_unexecuted_blocks=1 00:06:45.400 00:06:45.400 ' 00:06:45.400 01:07:57 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:45.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.400 --rc genhtml_branch_coverage=1 00:06:45.400 --rc genhtml_function_coverage=1 00:06:45.400 --rc genhtml_legend=1 00:06:45.400 --rc geninfo_all_blocks=1 00:06:45.400 --rc geninfo_unexecuted_blocks=1 00:06:45.401 00:06:45.401 ' 00:06:45.401 01:07:57 version -- app/version.sh@17 -- # get_header_version major 00:06:45.401 01:07:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:45.401 01:07:57 version -- app/version.sh@14 -- # cut -f2 00:06:45.401 01:07:57 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.401 01:07:57 version -- app/version.sh@17 -- # major=25 00:06:45.401 01:07:57 version -- app/version.sh@18 -- # get_header_version minor 00:06:45.401 01:07:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:45.401 01:07:57 version -- app/version.sh@14 -- # cut -f2 00:06:45.401 01:07:57 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.401 01:07:57 version -- app/version.sh@18 -- # minor=1 00:06:45.401 01:07:57 
version -- app/version.sh@19 -- # get_header_version patch 00:06:45.401 01:07:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:45.401 01:07:57 version -- app/version.sh@14 -- # cut -f2 00:06:45.401 01:07:57 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.401 01:07:57 version -- app/version.sh@19 -- # patch=0 00:06:45.401 01:07:57 version -- app/version.sh@20 -- # get_header_version suffix 00:06:45.401 01:07:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:45.401 01:07:57 version -- app/version.sh@14 -- # cut -f2 00:06:45.401 01:07:57 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.401 01:07:58 version -- app/version.sh@20 -- # suffix=-pre 00:06:45.401 01:07:58 version -- app/version.sh@22 -- # version=25.1 00:06:45.401 01:07:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:45.401 01:07:58 version -- app/version.sh@28 -- # version=25.1rc0 00:06:45.401 01:07:58 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:45.401 01:07:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:45.401 01:07:58 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:45.401 01:07:58 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:45.401 00:06:45.401 real 0m0.325s 00:06:45.401 user 0m0.204s 00:06:45.401 sys 0m0.179s 00:06:45.401 01:07:58 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.401 01:07:58 version -- common/autotest_common.sh@10 -- # set +x 00:06:45.402 ************************************ 00:06:45.402 END TEST version 00:06:45.402 ************************************ 00:06:45.402 
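The version test traced above derives the SPDK version by grepping `SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX` defines out of `include/spdk/version.h`, cutting the second tab-separated field, stripping quotes, and composing `25.1rc0` when the patch is 0 and the suffix is `-pre`. A minimal, self-contained sketch of that pipeline, using an illustrative stand-in header rather than the real `include/spdk/version.h`:

```shell
# Sketch of the app/version.sh pipeline seen in the trace above.
# The header written here is illustrative stand-in data, not the real file.
hdr=$(mktemp)
printf '#define SPDK_VERSION_MAJOR\t25\n' > "$hdr"
printf '#define SPDK_VERSION_MINOR\t1\n' >> "$hdr"
printf '#define SPDK_VERSION_PATCH\t0\n' >> "$hdr"
printf '#define SPDK_VERSION_SUFFIX\t"-pre"\n' >> "$hdr"

get_header_version() {
    # Same pipeline as the trace: grep the #define, take field 2, drop quotes.
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

version="${major}.${minor}"
if [ "$patch" != 0 ]; then version="${version}.${patch}"; fi
# The trace maps a "-pre" suffix to the python-style "rc0" version (25.1rc0).
if [ "$suffix" = "-pre" ]; then version="${version}rc0"; fi
echo "$version"    # prints 25.1rc0
rm -f "$hdr"
```

The test then compares this against `python3 -c 'import spdk; print(spdk.__version__)'` and passes only when both report the same string, as the `[[ 25.1rc0 == \2\5\.\1\r\c\0 ]]` check above shows.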
01:07:58 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:45.402 01:07:58 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:45.402 01:07:58 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:45.402 01:07:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.402 01:07:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.402 01:07:58 -- common/autotest_common.sh@10 -- # set +x 00:06:45.678 ************************************ 00:06:45.678 START TEST bdev_raid 00:06:45.678 ************************************ 00:06:45.678 01:07:58 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:45.678 * Looking for test storage... 00:06:45.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:45.678 01:07:58 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:45.678 01:07:58 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:06:45.678 01:07:58 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:45.678 01:07:58 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.678 01:07:58 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:45.678 01:07:58 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.678 01:07:58 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:45.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.678 --rc genhtml_branch_coverage=1 00:06:45.678 --rc genhtml_function_coverage=1 00:06:45.678 --rc genhtml_legend=1 00:06:45.678 --rc geninfo_all_blocks=1 00:06:45.678 --rc geninfo_unexecuted_blocks=1 00:06:45.678 00:06:45.678 ' 00:06:45.678 01:07:58 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:45.678 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:45.678 --rc genhtml_branch_coverage=1 00:06:45.678 --rc genhtml_function_coverage=1 00:06:45.678 --rc genhtml_legend=1 00:06:45.678 --rc geninfo_all_blocks=1 00:06:45.678 --rc geninfo_unexecuted_blocks=1 00:06:45.678 00:06:45.678 ' 00:06:45.678 01:07:58 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:45.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.678 --rc genhtml_branch_coverage=1 00:06:45.678 --rc genhtml_function_coverage=1 00:06:45.678 --rc genhtml_legend=1 00:06:45.678 --rc geninfo_all_blocks=1 00:06:45.678 --rc geninfo_unexecuted_blocks=1 00:06:45.678 00:06:45.678 ' 00:06:45.678 01:07:58 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:45.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.678 --rc genhtml_branch_coverage=1 00:06:45.678 --rc genhtml_function_coverage=1 00:06:45.678 --rc genhtml_legend=1 00:06:45.678 --rc geninfo_all_blocks=1 00:06:45.678 --rc geninfo_unexecuted_blocks=1 00:06:45.678 00:06:45.678 ' 00:06:45.678 01:07:58 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:45.678 01:07:58 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:45.678 01:07:58 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:45.678 01:07:58 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:45.678 01:07:58 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:45.678 01:07:58 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:45.678 01:07:58 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:45.678 01:07:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.678 01:07:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.678 01:07:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:45.678 ************************************ 
00:06:45.678 START TEST raid1_resize_data_offset_test 00:06:45.678 ************************************ 00:06:45.678 Process raid pid: 71346 00:06:45.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.678 01:07:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:06:45.678 01:07:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71346 00:06:45.678 01:07:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71346' 00:06:45.678 01:07:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71346 00:06:45.678 01:07:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71346 ']' 00:06:45.678 01:07:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:45.678 01:07:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.678 01:07:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.678 01:07:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.678 01:07:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.678 01:07:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.941 [2024-10-15 01:07:58.453200] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:06:45.941 [2024-10-15 01:07:58.453374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.941 [2024-10-15 01:07:58.601649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.941 [2024-10-15 01:07:58.642342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.201 [2024-10-15 01:07:58.719728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.201 [2024-10-15 01:07:58.719875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.772 malloc0 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.772 malloc1 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.772 01:07:59 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.772 null0 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.772 [2024-10-15 01:07:59.372974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:46.772 [2024-10-15 01:07:59.375225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:46.772 [2024-10-15 01:07:59.375323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:46.772 [2024-10-15 01:07:59.375500] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:46.772 [2024-10-15 01:07:59.375551] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:46.772 [2024-10-15 01:07:59.375886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:46.772 [2024-10-15 01:07:59.376074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:46.772 [2024-10-15 01:07:59.376126] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:46.772 [2024-10-15 01:07:59.376351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.772 [2024-10-15 01:07:59.432874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.772 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.032 malloc2 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.032 [2024-10-15 01:07:59.646591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:47.032 [2024-10-15 01:07:59.655927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.032 [2024-10-15 01:07:59.658300] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71346 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71346 ']' 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71346 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71346 00:06:47.032 killing process with pid 71346 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71346' 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71346 00:06:47.032 01:07:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71346 00:06:47.032 [2024-10-15 01:07:59.732138] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:47.032 [2024-10-15 01:07:59.734067] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:47.032 [2024-10-15 01:07:59.734291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.032 [2024-10-15 01:07:59.734319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:47.032 [2024-10-15 01:07:59.745349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:47.032 [2024-10-15 01:07:59.745795] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:47.032 [2024-10-15 01:07:59.745822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:47.601 [2024-10-15 01:08:00.142437] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:47.861 01:08:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:47.861 00:06:47.861 real 0m2.079s 00:06:47.861 user 0m1.891s 00:06:47.861 sys 0m0.599s 00:06:47.861 01:08:00 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.861 01:08:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.861 ************************************ 00:06:47.861 END TEST raid1_resize_data_offset_test 00:06:47.861 ************************************ 00:06:47.861 01:08:00 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:47.861 01:08:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:47.861 01:08:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.861 01:08:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:47.861 ************************************ 00:06:47.861 START TEST raid0_resize_superblock_test 00:06:47.861 ************************************ 00:06:47.861 01:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:06:47.861 Process raid pid: 71402 00:06:47.861 01:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:47.861 01:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71402 00:06:47.861 01:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:47.861 01:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71402' 00:06:47.861 01:08:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71402 00:06:47.861 01:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71402 ']' 00:06:47.861 01:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.861 01:08:00 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.861 01:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.861 01:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.861 01:08:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.121 [2024-10-15 01:08:00.602204] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:06:48.121 [2024-10-15 01:08:00.602379] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.121 [2024-10-15 01:08:00.748969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.121 [2024-10-15 01:08:00.791417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.381 [2024-10-15 01:08:00.868605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.381 [2024-10-15 01:08:00.868751] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.949 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.949 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:48.949 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:48.950 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.950 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:48.950 malloc0 00:06:48.950 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.950 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:48.950 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.950 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.950 [2024-10-15 01:08:01.632432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:48.950 [2024-10-15 01:08:01.632613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.950 [2024-10-15 01:08:01.632672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:48.950 [2024-10-15 01:08:01.632722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.950 [2024-10-15 01:08:01.635244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.950 [2024-10-15 01:08:01.635334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:48.950 pt0 00:06:48.950 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.950 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:48.950 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.950 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.209 fcee9836-635d-4ca3-ad97-e3e31be47163 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.209 9540ccaa-430b-44ea-a437-63ee913aef4e 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.209 a0975788-5074-4bd2-8312-de993bde8a61 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.209 [2024-10-15 01:08:01.841068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9540ccaa-430b-44ea-a437-63ee913aef4e is claimed 00:06:49.209 [2024-10-15 01:08:01.841198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a0975788-5074-4bd2-8312-de993bde8a61 is claimed 00:06:49.209 [2024-10-15 01:08:01.841358] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:49.209 [2024-10-15 01:08:01.841379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:49.209 [2024-10-15 01:08:01.841672] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:49.209 [2024-10-15 01:08:01.841848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:49.209 [2024-10-15 01:08:01.841859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:49.209 [2024-10-15 01:08:01.842003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:49.209 01:08:01 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.209 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.469 [2024-10-15 01:08:01.933120] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:49.469 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.469 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:49.469 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:49.469 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:49.469 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:49.469 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.469 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.469 [2024-10-15 01:08:01.977003] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:49.469 [2024-10-15 01:08:01.977035] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9540ccaa-430b-44ea-a437-63ee913aef4e' was resized: old size 131072, new size 204800 00:06:49.470 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:49.470 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:49.470 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.470 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.470 [2024-10-15 01:08:01.988881] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:49.470 [2024-10-15 01:08:01.988909] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'a0975788-5074-4bd2-8312-de993bde8a61' was resized: old size 131072, new size 204800 00:06:49.470 [2024-10-15 01:08:01.988941] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:49.470 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.470 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:49.470 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.470 01:08:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:49.470 01:08:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.470 01:08:02 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.470 [2024-10-15 01:08:02.096805] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.470 [2024-10-15 01:08:02.144508] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:49.470 [2024-10-15 01:08:02.144585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:49.470 [2024-10-15 01:08:02.144606] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:49.470 [2024-10-15 01:08:02.144621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:49.470 [2024-10-15 01:08:02.144796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:49.470 [2024-10-15 01:08:02.144852] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:49.470 [2024-10-15 01:08:02.144868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.470 [2024-10-15 01:08:02.156434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:49.470 [2024-10-15 01:08:02.156500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:49.470 [2024-10-15 01:08:02.156524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:49.470 [2024-10-15 01:08:02.156539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:49.470 [2024-10-15 01:08:02.159080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:49.470 [2024-10-15 01:08:02.159123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:49.470 [2024-10-15 01:08:02.160806] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9540ccaa-430b-44ea-a437-63ee913aef4e 00:06:49.470 [2024-10-15 01:08:02.160903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9540ccaa-430b-44ea-a437-63ee913aef4e is claimed 00:06:49.470 [2024-10-15 01:08:02.160998] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a0975788-5074-4bd2-8312-de993bde8a61 00:06:49.470 [2024-10-15 01:08:02.161024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a0975788-5074-4bd2-8312-de993bde8a61 is claimed 00:06:49.470 [2024-10-15 01:08:02.161164] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev a0975788-5074-4bd2-8312-de993bde8a61 (2) smaller than existing raid bdev Raid (3) 00:06:49.470 [2024-10-15 01:08:02.161206] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 9540ccaa-430b-44ea-a437-63ee913aef4e: File exists 00:06:49.470 [2024-10-15 01:08:02.161248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:06:49.470 [2024-10-15 01:08:02.161260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:49.470 [2024-10-15 01:08:02.161529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:06:49.470 pt0 00:06:49.470 [2024-10-15 01:08:02.161716] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:06:49.470 [2024-10-15 01:08:02.161739] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:06:49.470 [2024-10-15 01:08:02.161871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.470 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.470 [2024-10-15 01:08:02.185089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:49.730 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.730 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:49.730 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:49.730 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:49.730 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71402 00:06:49.730 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71402 ']' 00:06:49.730 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71402 00:06:49.730 01:08:02 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:06:49.730 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.730 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71402 00:06:49.730 killing process with pid 71402 00:06:49.730 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.730 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.730 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71402' 00:06:49.730 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71402 00:06:49.730 [2024-10-15 01:08:02.263631] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:49.730 [2024-10-15 01:08:02.263704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:49.730 [2024-10-15 01:08:02.263752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:49.730 [2024-10-15 01:08:02.263763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:06:49.730 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71402 00:06:49.989 [2024-10-15 01:08:02.571489] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:50.249 01:08:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:50.249 00:06:50.249 real 0m2.371s 00:06:50.249 user 0m2.495s 00:06:50.249 sys 0m0.630s 00:06:50.249 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.249 01:08:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.249 
************************************ 00:06:50.249 END TEST raid0_resize_superblock_test 00:06:50.249 ************************************ 00:06:50.249 01:08:02 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:50.250 01:08:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:50.250 01:08:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.250 01:08:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:50.250 ************************************ 00:06:50.250 START TEST raid1_resize_superblock_test 00:06:50.250 ************************************ 00:06:50.250 01:08:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:06:50.250 Process raid pid: 71478 00:06:50.250 01:08:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:50.250 01:08:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71478 00:06:50.250 01:08:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:50.250 01:08:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71478' 00:06:50.250 01:08:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71478 00:06:50.250 01:08:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71478 ']' 00:06:50.250 01:08:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.250 01:08:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.250 01:08:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:50.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.250 01:08:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.250 01:08:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.510 [2024-10-15 01:08:03.037852] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:06:50.510 [2024-10-15 01:08:03.038103] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.510 [2024-10-15 01:08:03.185116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.510 [2024-10-15 01:08:03.229219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.770 [2024-10-15 01:08:03.309580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.770 [2024-10-15 01:08:03.309749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.340 01:08:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.340 01:08:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:51.340 01:08:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:51.340 01:08:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.340 01:08:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.599 malloc0 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # 
rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.599 [2024-10-15 01:08:04.077902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:51.599 [2024-10-15 01:08:04.078078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:51.599 [2024-10-15 01:08:04.078125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:51.599 [2024-10-15 01:08:04.078164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:51.599 [2024-10-15 01:08:04.080679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:51.599 [2024-10-15 01:08:04.080769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:51.599 pt0 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.599 0b5475bd-fbe0-4172-8740-f789ce22cf4d 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.599 
eff0ffdc-946d-4dd4-8e62-c40748b000a1 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.599 3b457709-102d-4ead-9c08-96b86cfa00d6 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.599 [2024-10-15 01:08:04.249755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev eff0ffdc-946d-4dd4-8e62-c40748b000a1 is claimed 00:06:51.599 [2024-10-15 01:08:04.249842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3b457709-102d-4ead-9c08-96b86cfa00d6 is claimed 00:06:51.599 [2024-10-15 01:08:04.249962] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:51.599 [2024-10-15 01:08:04.249977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:51.599 [2024-10-15 01:08:04.250228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:51.599 [2024-10-15 01:08:04.250374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:51.599 [2024-10-15 
01:08:04.250384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:51.599 [2024-10-15 01:08:04.250535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:51.599 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.600 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.600 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.600 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:51.600 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:51.600 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:51.600 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.600 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test 
-- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.860 [2024-10-15 01:08:04.349789] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.860 [2024-10-15 01:08:04.389622] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:51.860 [2024-10-15 01:08:04.389692] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'eff0ffdc-946d-4dd4-8e62-c40748b000a1' was resized: old size 131072, new size 204800 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.860 01:08:04 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.860 [2024-10-15 01:08:04.401559] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:51.860 [2024-10-15 01:08:04.401617] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '3b457709-102d-4ead-9c08-96b86cfa00d6' was resized: old size 131072, new size 204800 00:06:51.860 [2024-10-15 01:08:04.401684] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:51.860 [2024-10-15 01:08:04.501523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.860 [2024-10-15 01:08:04.525291] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:51.860 [2024-10-15 01:08:04.525396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:51.860 [2024-10-15 01:08:04.525436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:51.860 
[2024-10-15 01:08:04.525590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:51.860 [2024-10-15 01:08:04.525771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:51.860 [2024-10-15 01:08:04.525863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:51.860 [2024-10-15 01:08:04.525914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.860 [2024-10-15 01:08:04.537229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:51.860 [2024-10-15 01:08:04.537310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:51.860 [2024-10-15 01:08:04.537343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:51.860 [2024-10-15 01:08:04.537370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:51.860 [2024-10-15 01:08:04.539433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:51.860 [2024-10-15 01:08:04.539505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:51.860 [2024-10-15 01:08:04.540872] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev eff0ffdc-946d-4dd4-8e62-c40748b000a1 00:06:51.860 [2024-10-15 01:08:04.540981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
eff0ffdc-946d-4dd4-8e62-c40748b000a1 is claimed 00:06:51.860 [2024-10-15 01:08:04.541107] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3b457709-102d-4ead-9c08-96b86cfa00d6 00:06:51.860 [2024-10-15 01:08:04.541170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3b457709-102d-4ead-9c08-96b86cfa00d6 is claimed 00:06:51.860 [2024-10-15 01:08:04.541319] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 3b457709-102d-4ead-9c08-96b86cfa00d6 (2) smaller than existing raid bdev Raid (3) 00:06:51.860 [2024-10-15 01:08:04.541388] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev eff0ffdc-946d-4dd4-8e62-c40748b000a1: File exists 00:06:51.860 [2024-10-15 01:08:04.541479] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:06:51.860 [2024-10-15 01:08:04.541507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:51.860 [2024-10-15 01:08:04.541740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:06:51.860 [2024-10-15 01:08:04.541938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:06:51.860 [2024-10-15 01:08:04.541981] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:06:51.860 [2024-10-15 01:08:04.542153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.860 pt0 00:06:51.860 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.861 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:51.861 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.861 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.861 01:08:04 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.861 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:51.861 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:51.861 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:51.861 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:51.861 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.861 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.861 [2024-10-15 01:08:04.565563] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.121 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.121 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.121 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:52.121 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:52.121 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71478 00:06:52.121 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71478 ']' 00:06:52.121 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71478 00:06:52.121 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:52.121 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.121 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 71478 00:06:52.121 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.121 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.121 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71478' 00:06:52.121 killing process with pid 71478 00:06:52.121 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71478 00:06:52.121 [2024-10-15 01:08:04.630383] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.121 [2024-10-15 01:08:04.630448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.121 [2024-10-15 01:08:04.630494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.121 [2024-10-15 01:08:04.630502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:06:52.121 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71478 00:06:52.121 [2024-10-15 01:08:04.789289] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:52.381 01:08:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:52.381 00:06:52.381 real 0m2.043s 00:06:52.381 user 0m2.137s 00:06:52.381 sys 0m0.620s 00:06:52.381 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.381 ************************************ 00:06:52.381 END TEST raid1_resize_superblock_test 00:06:52.381 ************************************ 00:06:52.381 01:08:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.381 01:08:05 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:52.381 01:08:05 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' 
Linux = Linux ']' 00:06:52.381 01:08:05 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:52.381 01:08:05 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:52.381 01:08:05 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:52.381 01:08:05 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:52.381 01:08:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:52.381 01:08:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.381 01:08:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:52.381 ************************************ 00:06:52.381 START TEST raid_function_test_raid0 00:06:52.381 ************************************ 00:06:52.381 01:08:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:06:52.381 01:08:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:52.381 01:08:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:52.381 01:08:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:52.381 Process raid pid: 71553 00:06:52.381 01:08:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71553 00:06:52.381 01:08:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:52.381 01:08:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71553' 00:06:52.381 01:08:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71553 00:06:52.381 01:08:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 71553 ']' 00:06:52.381 01:08:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.381 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.381 01:08:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.381 01:08:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.381 01:08:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.381 01:08:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:52.641 [2024-10-15 01:08:05.171407] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:06:52.641 [2024-10-15 01:08:05.171533] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.641 [2024-10-15 01:08:05.315895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.641 [2024-10-15 01:08:05.342292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.900 [2024-10-15 01:08:05.385213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.900 [2024-10-15 01:08:05.385251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.469 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.469 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:06:53.469 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:53.469 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.469 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 
00:06:53.469 Base_1 00:06:53.469 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.469 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:53.469 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.469 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:53.469 Base_2 00:06:53.469 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.469 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:53.469 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.469 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:53.469 [2024-10-15 01:08:06.052933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:53.469 [2024-10-15 01:08:06.054735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:53.469 [2024-10-15 01:08:06.054865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:53.469 [2024-10-15 01:08:06.054882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:53.469 [2024-10-15 01:08:06.055164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:53.469 [2024-10-15 01:08:06.055299] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:53.470 [2024-10-15 01:08:06.055310] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:06:53.470 [2024-10-15 01:08:06.055429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:53.470 
01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:53.470 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
raid /dev/nbd0 00:06:53.730 [2024-10-15 01:08:06.292567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:53.730 /dev/nbd0 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.730 1+0 records in 00:06:53.730 1+0 records out 00:06:53.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376324 s, 10.9 MB/s 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.730 
01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:53.730 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:53.990 { 00:06:53.990 "nbd_device": "/dev/nbd0", 00:06:53.990 "bdev_name": "raid" 00:06:53.990 } 00:06:53.990 ]' 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.990 { 00:06:53.990 "nbd_device": "/dev/nbd0", 00:06:53.990 "bdev_name": "raid" 00:06:53.990 } 00:06:53.990 ]' 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:53.990 01:08:06 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:53.990 4096+0 records in 00:06:53.990 4096+0 
records out 00:06:53.990 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0336639 s, 62.3 MB/s 00:06:53.990 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:54.251 4096+0 records in 00:06:54.251 4096+0 records out 00:06:54.251 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.181698 s, 11.5 MB/s 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:54.251 128+0 records in 00:06:54.251 128+0 records out 00:06:54.251 65536 bytes (66 kB, 64 KiB) copied, 0.00114382 s, 57.3 MB/s 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # 
unmap_off=526336 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:54.251 2035+0 records in 00:06:54.251 2035+0 records out 00:06:54.251 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0153277 s, 68.0 MB/s 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:54.251 456+0 records in 00:06:54.251 456+0 records out 00:06:54.251 233472 bytes (233 kB, 228 KiB) copied, 0.00386399 s, 60.4 MB/s 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 
00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.251 01:08:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:54.512 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.512 [2024-10-15 01:08:07.158755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.512 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.512 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.512 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.512 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.512 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.512 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:54.512 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.512 
01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:54.512 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:54.512 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71553 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 71553 ']' 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 71553 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71553 00:06:54.772 killing process with pid 71553 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71553' 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 71553 00:06:54.772 [2024-10-15 01:08:07.456231] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:54.772 [2024-10-15 01:08:07.456352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.772 [2024-10-15 01:08:07.456413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:54.772 01:08:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 71553 00:06:54.772 [2024-10-15 01:08:07.456427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:54.772 [2024-10-15 01:08:07.479072] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:55.032 ************************************ 00:06:55.032 END TEST raid_function_test_raid0 00:06:55.032 ************************************ 00:06:55.032 01:08:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:55.032 00:06:55.032 real 0m2.598s 00:06:55.032 user 0m3.210s 00:06:55.032 sys 0m0.892s 00:06:55.032 01:08:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.032 01:08:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:55.032 
01:08:07 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:55.032 01:08:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:55.032 01:08:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.032 01:08:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.293 ************************************ 00:06:55.293 START TEST raid_function_test_concat 00:06:55.293 ************************************ 00:06:55.293 01:08:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:06:55.293 01:08:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:55.293 01:08:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:55.293 01:08:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:55.293 01:08:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=71668 00:06:55.293 01:08:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:55.293 01:08:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71668' 00:06:55.293 Process raid pid: 71668 00:06:55.293 01:08:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 71668 00:06:55.293 01:08:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 71668 ']' 00:06:55.293 01:08:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.293 01:08:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:55.293 01:08:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.293 01:08:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.293 01:08:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:55.293 [2024-10-15 01:08:07.838325] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:06:55.293 [2024-10-15 01:08:07.838451] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.293 [2024-10-15 01:08:07.982987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.293 [2024-10-15 01:08:08.009979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.552 [2024-10-15 01:08:08.053153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.552 [2024-10-15 01:08:08.053202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.121 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:56.122 Base_1 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.122 01:08:08 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:56.122 Base_2 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:56.122 [2024-10-15 01:08:08.689119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:56.122 [2024-10-15 01:08:08.690932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:56.122 [2024-10-15 01:08:08.691024] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:56.122 [2024-10-15 01:08:08.691038] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:56.122 [2024-10-15 01:08:08.691324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:56.122 [2024-10-15 01:08:08.691454] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:56.122 [2024-10-15 01:08:08.691474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:06:56.122 [2024-10-15 01:08:08.691613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.122 01:08:08 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:56.122 01:08:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:56.382 [2024-10-15 01:08:08.928730] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:56.382 /dev/nbd0 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.382 1+0 records in 00:06:56.382 1+0 records out 00:06:56.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424431 s, 9.7 MB/s 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 
4096 '!=' 0 ']' 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:56.382 01:08:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.641 { 00:06:56.641 "nbd_device": "/dev/nbd0", 00:06:56.641 "bdev_name": "raid" 00:06:56.641 } 00:06:56.641 ]' 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.641 { 00:06:56.641 "nbd_device": "/dev/nbd0", 00:06:56.641 "bdev_name": "raid" 00:06:56.641 } 00:06:56.641 ]' 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 
00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:56.641 4096+0 records in 00:06:56.641 4096+0 records out 00:06:56.641 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.0327005 s, 64.1 MB/s 00:06:56.641 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:56.903 4096+0 records in 00:06:56.903 4096+0 records out 00:06:56.903 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.183126 s, 11.5 MB/s 00:06:56.903 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:56.903 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:56.903 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:56.903 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:56.903 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:56.903 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:56.903 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:56.903 128+0 records in 00:06:56.903 128+0 records out 00:06:56.903 65536 bytes (66 kB, 64 KiB) copied, 0.00107492 s, 61.0 MB/s 00:06:56.903 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:56.903 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:56.903 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:56.903 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:56.903 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:56.903 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:56.904 01:08:09 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:56.904 2035+0 records in 00:06:56.904 2035+0 records out 00:06:56.904 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0148496 s, 70.2 MB/s 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:56.904 456+0 records in 00:06:56.904 456+0 records out 00:06:56.904 233472 bytes (233 kB, 228 KiB) copied, 0.00313704 s, 74.4 MB/s 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:56.904 01:08:09 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.904 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:57.163 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.163 [2024-10-15 01:08:09.813550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.163 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.163 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.163 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.163 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.164 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.164 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:57.164 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.164 
01:08:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:57.164 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:57.164 01:08:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 71668 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 71668 ']' 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 71668 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71668 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.423 killing process with pid 71668 01:08:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71668' 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 71668 00:06:57.423 [2024-10-15 01:08:10.118715] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.423 [2024-10-15 01:08:10.118855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.423 [2024-10-15 01:08:10.118917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:57.423 01:08:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 71668 00:06:57.423 [2024-10-15 01:08:10.118939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:57.423 [2024-10-15 01:08:10.141892] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.683 01:08:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:57.683 00:06:57.683 real 0m2.586s 00:06:57.683 user 0m3.226s 00:06:57.683 sys 0m0.866s 00:06:57.683 01:08:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.683 01:08:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:57.683 ************************************ 00:06:57.683 END TEST raid_function_test_concat 00:06:57.683 ************************************ 
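The raid_function_test_concat trace above discards three block ranges on /dev/nbd0 with blkdiscard and re-verifies with cmp; the byte offset and length it passes to `blkdiscard -o`/`-l` are simply the block offset and block count multiplied by the 512-byte logical sector size reported by lsblk. A minimal standalone sketch of that arithmetic (a hypothetical helper script, not part of the SPDK tree) reproducing the three (offset, length) pairs seen in the log:

```shell
# Reproduce the unmap offset/length arithmetic from the raid_unmap_data_verify loop.
# blksize matches the LOG-SEC value lsblk reported for /dev/nbd0 above (512).
blksize=512
unmap_blk_offs=(0 1028 321)   # block offsets used by the test
unmap_blk_nums=(128 2035 456) # block counts used by the test

for i in 0 1 2; do
    unmap_off=$((unmap_blk_offs[i] * blksize)) # byte offset for 'blkdiscard -o'
    unmap_len=$((unmap_blk_nums[i] * blksize)) # byte length for 'blkdiscard -l'
    echo "$unmap_off $unmap_len"
done
```

Running this prints `0 65536`, `526336 1041920`, and `164352 233472`, matching the `blkdiscard -o ... -l ...` invocations recorded in the log.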
00:06:57.683 01:08:10 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:57.683 01:08:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:57.683 01:08:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.683 01:08:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:57.942 ************************************ 00:06:57.942 START TEST raid0_resize_test 00:06:57.942 ************************************ 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71779 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71779' 00:06:57.942 Process raid pid: 71779 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71779 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 
71779 ']' 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.942 01:08:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.942 [2024-10-15 01:08:10.496322] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:06:57.943 [2024-10-15 01:08:10.496438] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.943 [2024-10-15 01:08:10.641379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.203 [2024-10-15 01:08:10.668398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.203 [2024-10-15 01:08:10.711059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.203 [2024-10-15 01:08:10.711096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.773 Base_1 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.773 Base_2 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.773 [2024-10-15 01:08:11.344693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:58.773 [2024-10-15 01:08:11.346451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:58.773 [2024-10-15 01:08:11.346504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:58.773 [2024-10-15 01:08:11.346521] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:58.773 [2024-10-15 01:08:11.346796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:58.773 [2024-10-15 01:08:11.346921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:58.773 [2024-10-15 01:08:11.346935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 
00:06:58.773 [2024-10-15 01:08:11.347063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.773 [2024-10-15 01:08:11.356651] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:58.773 [2024-10-15 01:08:11.356678] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:58.773 true 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.773 [2024-10-15 01:08:11.372817] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.773 [2024-10-15 01:08:11.416535] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:58.773 [2024-10-15 01:08:11.416559] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:58.773 [2024-10-15 01:08:11.416594] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:58.773 true 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.773 [2024-10-15 01:08:11.432690] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 71779 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 71779 ']' 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 71779 00:06:58.773 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:06:58.774 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.774 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71779 00:06:59.034 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.034 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.034 killing process with pid 71779 00:06:59.034 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71779' 00:06:59.034 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 71779 00:06:59.034 [2024-10-15 01:08:11.499206] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:59.034 [2024-10-15 01:08:11.499290] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.034 [2024-10-15 01:08:11.499348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:59.034 [2024-10-15 01:08:11.499358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:59.034 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 71779 00:06:59.034 [2024-10-15 01:08:11.500813] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.034 01:08:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:59.034 
00:06:59.034 real 0m1.298s 00:06:59.034 user 0m1.442s 00:06:59.034 sys 0m0.302s 00:06:59.034 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.034 01:08:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.034 ************************************ 00:06:59.034 END TEST raid0_resize_test 00:06:59.034 ************************************ 00:06:59.294 01:08:11 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:59.294 01:08:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:59.294 01:08:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.294 01:08:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:59.294 ************************************ 00:06:59.294 START TEST raid1_resize_test 00:06:59.294 ************************************ 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71830 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71830' 00:06:59.294 Process raid pid: 71830 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71830 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 71830 ']' 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.294 01:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.294 [2024-10-15 01:08:11.860591] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:06:59.294 [2024-10-15 01:08:11.860721] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.294 [2024-10-15 01:08:12.005560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.554 [2024-10-15 01:08:12.032479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.554 [2024-10-15 01:08:12.075669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.554 [2024-10-15 01:08:12.075723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.124 Base_1 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.124 Base_2 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.124 [2024-10-15 01:08:12.705475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:00.124 [2024-10-15 01:08:12.707239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:00.124 [2024-10-15 01:08:12.707302] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:00.124 [2024-10-15 01:08:12.707318] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:00.124 [2024-10-15 01:08:12.707571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:07:00.124 [2024-10-15 01:08:12.707668] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:00.124 [2024-10-15 01:08:12.707676] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:07:00.124 [2024-10-15 01:08:12.707781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.124 [2024-10-15 01:08:12.717447] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:00.124 [2024-10-15 01:08:12.717485] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:00.124 true 00:07:00.124 
01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.124 [2024-10-15 01:08:12.733593] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.124 [2024-10-15 01:08:12.777319] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:00.124 [2024-10-15 01:08:12.777341] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:00.124 [2024-10-15 01:08:12.777360] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:00.124 true 00:07:00.124 01:08:12 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.124 [2024-10-15 01:08:12.793504] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 71830 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 71830 ']' 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 71830 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71830 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.124 killing process with pid 71830 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71830' 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 71830 00:07:00.124 [2024-10-15 01:08:12.845819] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:00.124 [2024-10-15 01:08:12.845905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:00.124 01:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 71830 00:07:00.124 [2024-10-15 01:08:12.846324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:00.124 [2024-10-15 01:08:12.846345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:07:00.385 [2024-10-15 01:08:12.847461] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:00.385 01:08:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:00.385 00:07:00.385 real 0m1.277s 00:07:00.385 user 0m1.424s 00:07:00.385 sys 0m0.281s 00:07:00.385 01:08:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.385 01:08:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.385 ************************************ 00:07:00.385 END TEST raid1_resize_test 00:07:00.385 ************************************ 00:07:00.645 01:08:13 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:00.645 01:08:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:00.645 01:08:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:00.645 01:08:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:00.645 01:08:13 bdev_raid 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.645 01:08:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:00.645 ************************************ 00:07:00.645 START TEST raid_state_function_test 00:07:00.645 ************************************ 00:07:00.645 01:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:07:00.645 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:00.645 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:00.645 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71876 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:00.646 Process raid pid: 71876 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71876' 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71876 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71876 ']' 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.646 01:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.646 [2024-10-15 01:08:13.215521] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:07:00.646 [2024-10-15 01:08:13.215636] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.646 [2024-10-15 01:08:13.359798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.906 [2024-10-15 01:08:13.386684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.906 [2024-10-15 01:08:13.429320] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.906 [2024-10-15 01:08:13.429372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.478 [2024-10-15 01:08:14.039314] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:01.478 
[2024-10-15 01:08:14.039361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:01.478 [2024-10-15 01:08:14.039377] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:01.478 [2024-10-15 01:08:14.039388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.478 "name": "Existed_Raid", 00:07:01.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.478 "strip_size_kb": 64, 00:07:01.478 "state": "configuring", 00:07:01.478 "raid_level": "raid0", 00:07:01.478 "superblock": false, 00:07:01.478 "num_base_bdevs": 2, 00:07:01.478 "num_base_bdevs_discovered": 0, 00:07:01.478 "num_base_bdevs_operational": 2, 00:07:01.478 "base_bdevs_list": [ 00:07:01.478 { 00:07:01.478 "name": "BaseBdev1", 00:07:01.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.478 "is_configured": false, 00:07:01.478 "data_offset": 0, 00:07:01.478 "data_size": 0 00:07:01.478 }, 00:07:01.478 { 00:07:01.478 "name": "BaseBdev2", 00:07:01.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.478 "is_configured": false, 00:07:01.478 "data_offset": 0, 00:07:01.478 "data_size": 0 00:07:01.478 } 00:07:01.478 ] 00:07:01.478 }' 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.478 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.738 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:01.738 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.738 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.738 [2024-10-15 01:08:14.451070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:01.738 [2024-10-15 01:08:14.451113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, 
state configuring 00:07:01.738 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.738 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:01.738 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.738 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.998 [2024-10-15 01:08:14.463056] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:01.998 [2024-10-15 01:08:14.463095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:01.998 [2024-10-15 01:08:14.463103] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:01.998 [2024-10-15 01:08:14.463122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.998 [2024-10-15 01:08:14.484003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:01.998 BaseBdev1 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:01.998 01:08:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.998 [ 00:07:01.998 { 00:07:01.998 "name": "BaseBdev1", 00:07:01.998 "aliases": [ 00:07:01.998 "3b77f17e-4b07-46cd-ae3e-ff0ab4e60ccb" 00:07:01.998 ], 00:07:01.998 "product_name": "Malloc disk", 00:07:01.998 "block_size": 512, 00:07:01.998 "num_blocks": 65536, 00:07:01.998 "uuid": "3b77f17e-4b07-46cd-ae3e-ff0ab4e60ccb", 00:07:01.998 "assigned_rate_limits": { 00:07:01.998 "rw_ios_per_sec": 0, 00:07:01.998 "rw_mbytes_per_sec": 0, 00:07:01.998 "r_mbytes_per_sec": 0, 00:07:01.998 "w_mbytes_per_sec": 0 00:07:01.998 }, 00:07:01.998 "claimed": true, 00:07:01.998 "claim_type": "exclusive_write", 00:07:01.998 "zoned": false, 00:07:01.998 "supported_io_types": { 00:07:01.998 "read": true, 00:07:01.998 "write": true, 00:07:01.998 "unmap": true, 00:07:01.998 "flush": true, 
00:07:01.998 "reset": true, 00:07:01.998 "nvme_admin": false, 00:07:01.998 "nvme_io": false, 00:07:01.998 "nvme_io_md": false, 00:07:01.998 "write_zeroes": true, 00:07:01.998 "zcopy": true, 00:07:01.998 "get_zone_info": false, 00:07:01.998 "zone_management": false, 00:07:01.998 "zone_append": false, 00:07:01.998 "compare": false, 00:07:01.998 "compare_and_write": false, 00:07:01.998 "abort": true, 00:07:01.998 "seek_hole": false, 00:07:01.998 "seek_data": false, 00:07:01.998 "copy": true, 00:07:01.998 "nvme_iov_md": false 00:07:01.998 }, 00:07:01.998 "memory_domains": [ 00:07:01.998 { 00:07:01.998 "dma_device_id": "system", 00:07:01.998 "dma_device_type": 1 00:07:01.998 }, 00:07:01.998 { 00:07:01.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.998 "dma_device_type": 2 00:07:01.998 } 00:07:01.998 ], 00:07:01.998 "driver_specific": {} 00:07:01.998 } 00:07:01.998 ] 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.998 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.998 "name": "Existed_Raid", 00:07:01.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.998 "strip_size_kb": 64, 00:07:01.998 "state": "configuring", 00:07:01.998 "raid_level": "raid0", 00:07:01.998 "superblock": false, 00:07:01.998 "num_base_bdevs": 2, 00:07:01.998 "num_base_bdevs_discovered": 1, 00:07:01.998 "num_base_bdevs_operational": 2, 00:07:01.998 "base_bdevs_list": [ 00:07:01.999 { 00:07:01.999 "name": "BaseBdev1", 00:07:01.999 "uuid": "3b77f17e-4b07-46cd-ae3e-ff0ab4e60ccb", 00:07:01.999 "is_configured": true, 00:07:01.999 "data_offset": 0, 00:07:01.999 "data_size": 65536 00:07:01.999 }, 00:07:01.999 { 00:07:01.999 "name": "BaseBdev2", 00:07:01.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.999 "is_configured": false, 00:07:01.999 "data_offset": 0, 00:07:01.999 "data_size": 0 00:07:01.999 } 00:07:01.999 ] 00:07:01.999 }' 00:07:01.999 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.999 01:08:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:02.259 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:02.259 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.259 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.519 [2024-10-15 01:08:14.983174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:02.519 [2024-10-15 01:08:14.983232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:02.519 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.519 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:02.519 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.519 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.519 [2024-10-15 01:08:14.995214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:02.519 [2024-10-15 01:08:14.997084] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:02.519 [2024-10-15 01:08:14.997121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:02.519 01:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.519 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:02.519 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:02.519 01:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.519 "name": "Existed_Raid", 00:07:02.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.519 "strip_size_kb": 64, 00:07:02.519 "state": "configuring", 00:07:02.519 "raid_level": "raid0", 00:07:02.519 "superblock": false, 00:07:02.519 "num_base_bdevs": 2, 00:07:02.519 
"num_base_bdevs_discovered": 1, 00:07:02.519 "num_base_bdevs_operational": 2, 00:07:02.519 "base_bdevs_list": [ 00:07:02.519 { 00:07:02.519 "name": "BaseBdev1", 00:07:02.519 "uuid": "3b77f17e-4b07-46cd-ae3e-ff0ab4e60ccb", 00:07:02.519 "is_configured": true, 00:07:02.519 "data_offset": 0, 00:07:02.519 "data_size": 65536 00:07:02.519 }, 00:07:02.519 { 00:07:02.519 "name": "BaseBdev2", 00:07:02.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.519 "is_configured": false, 00:07:02.519 "data_offset": 0, 00:07:02.519 "data_size": 0 00:07:02.519 } 00:07:02.519 ] 00:07:02.519 }' 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.519 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.779 [2024-10-15 01:08:15.410354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:02.779 [2024-10-15 01:08:15.410409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:02.779 [2024-10-15 01:08:15.410417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:02.779 [2024-10-15 01:08:15.410683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:02.779 [2024-10-15 01:08:15.410808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:02.779 [2024-10-15 01:08:15.410834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:02.779 [2024-10-15 01:08:15.411066] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.779 BaseBdev2 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.779 [ 00:07:02.779 { 00:07:02.779 "name": "BaseBdev2", 00:07:02.779 "aliases": [ 00:07:02.779 "2c29e246-72c6-4997-9cb9-d7c239920838" 00:07:02.779 ], 00:07:02.779 "product_name": "Malloc disk", 00:07:02.779 "block_size": 512, 00:07:02.779 "num_blocks": 65536, 00:07:02.779 "uuid": "2c29e246-72c6-4997-9cb9-d7c239920838", 00:07:02.779 
"assigned_rate_limits": { 00:07:02.779 "rw_ios_per_sec": 0, 00:07:02.779 "rw_mbytes_per_sec": 0, 00:07:02.779 "r_mbytes_per_sec": 0, 00:07:02.779 "w_mbytes_per_sec": 0 00:07:02.779 }, 00:07:02.779 "claimed": true, 00:07:02.779 "claim_type": "exclusive_write", 00:07:02.779 "zoned": false, 00:07:02.779 "supported_io_types": { 00:07:02.779 "read": true, 00:07:02.779 "write": true, 00:07:02.779 "unmap": true, 00:07:02.779 "flush": true, 00:07:02.779 "reset": true, 00:07:02.779 "nvme_admin": false, 00:07:02.779 "nvme_io": false, 00:07:02.779 "nvme_io_md": false, 00:07:02.779 "write_zeroes": true, 00:07:02.779 "zcopy": true, 00:07:02.779 "get_zone_info": false, 00:07:02.779 "zone_management": false, 00:07:02.779 "zone_append": false, 00:07:02.779 "compare": false, 00:07:02.779 "compare_and_write": false, 00:07:02.779 "abort": true, 00:07:02.779 "seek_hole": false, 00:07:02.779 "seek_data": false, 00:07:02.779 "copy": true, 00:07:02.779 "nvme_iov_md": false 00:07:02.779 }, 00:07:02.779 "memory_domains": [ 00:07:02.779 { 00:07:02.779 "dma_device_id": "system", 00:07:02.779 "dma_device_type": 1 00:07:02.779 }, 00:07:02.779 { 00:07:02.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.779 "dma_device_type": 2 00:07:02.779 } 00:07:02.779 ], 00:07:02.779 "driver_specific": {} 00:07:02.779 } 00:07:02.779 ] 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.779 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.780 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.780 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.780 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.780 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.780 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.780 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.780 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.780 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.780 "name": "Existed_Raid", 00:07:02.780 "uuid": "a2eb611c-b0db-4fe6-bc2c-395c6320d7f2", 00:07:02.780 "strip_size_kb": 64, 00:07:02.780 "state": "online", 00:07:02.780 "raid_level": "raid0", 00:07:02.780 "superblock": false, 00:07:02.780 "num_base_bdevs": 2, 00:07:02.780 "num_base_bdevs_discovered": 2, 00:07:02.780 "num_base_bdevs_operational": 2, 00:07:02.780 "base_bdevs_list": [ 00:07:02.780 { 
00:07:02.780 "name": "BaseBdev1", 00:07:02.780 "uuid": "3b77f17e-4b07-46cd-ae3e-ff0ab4e60ccb", 00:07:02.780 "is_configured": true, 00:07:02.780 "data_offset": 0, 00:07:02.780 "data_size": 65536 00:07:02.780 }, 00:07:02.780 { 00:07:02.780 "name": "BaseBdev2", 00:07:02.780 "uuid": "2c29e246-72c6-4997-9cb9-d7c239920838", 00:07:02.780 "is_configured": true, 00:07:02.780 "data_offset": 0, 00:07:02.780 "data_size": 65536 00:07:02.780 } 00:07:02.780 ] 00:07:02.780 }' 00:07:02.780 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.780 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.350 [2024-10-15 01:08:15.861857] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:03.350 "name": "Existed_Raid", 00:07:03.350 "aliases": [ 00:07:03.350 "a2eb611c-b0db-4fe6-bc2c-395c6320d7f2" 00:07:03.350 ], 00:07:03.350 "product_name": "Raid Volume", 00:07:03.350 "block_size": 512, 00:07:03.350 "num_blocks": 131072, 00:07:03.350 "uuid": "a2eb611c-b0db-4fe6-bc2c-395c6320d7f2", 00:07:03.350 "assigned_rate_limits": { 00:07:03.350 "rw_ios_per_sec": 0, 00:07:03.350 "rw_mbytes_per_sec": 0, 00:07:03.350 "r_mbytes_per_sec": 0, 00:07:03.350 "w_mbytes_per_sec": 0 00:07:03.350 }, 00:07:03.350 "claimed": false, 00:07:03.350 "zoned": false, 00:07:03.350 "supported_io_types": { 00:07:03.350 "read": true, 00:07:03.350 "write": true, 00:07:03.350 "unmap": true, 00:07:03.350 "flush": true, 00:07:03.350 "reset": true, 00:07:03.350 "nvme_admin": false, 00:07:03.350 "nvme_io": false, 00:07:03.350 "nvme_io_md": false, 00:07:03.350 "write_zeroes": true, 00:07:03.350 "zcopy": false, 00:07:03.350 "get_zone_info": false, 00:07:03.350 "zone_management": false, 00:07:03.350 "zone_append": false, 00:07:03.350 "compare": false, 00:07:03.350 "compare_and_write": false, 00:07:03.350 "abort": false, 00:07:03.350 "seek_hole": false, 00:07:03.350 "seek_data": false, 00:07:03.350 "copy": false, 00:07:03.350 "nvme_iov_md": false 00:07:03.350 }, 00:07:03.350 "memory_domains": [ 00:07:03.350 { 00:07:03.350 "dma_device_id": "system", 00:07:03.350 "dma_device_type": 1 00:07:03.350 }, 00:07:03.350 { 00:07:03.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.350 "dma_device_type": 2 00:07:03.350 }, 00:07:03.350 { 00:07:03.350 "dma_device_id": "system", 00:07:03.350 "dma_device_type": 1 00:07:03.350 }, 00:07:03.350 { 00:07:03.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.350 "dma_device_type": 2 00:07:03.350 } 00:07:03.350 ], 00:07:03.350 "driver_specific": { 00:07:03.350 "raid": { 00:07:03.350 "uuid": "a2eb611c-b0db-4fe6-bc2c-395c6320d7f2", 
00:07:03.350 "strip_size_kb": 64, 00:07:03.350 "state": "online", 00:07:03.350 "raid_level": "raid0", 00:07:03.350 "superblock": false, 00:07:03.350 "num_base_bdevs": 2, 00:07:03.350 "num_base_bdevs_discovered": 2, 00:07:03.350 "num_base_bdevs_operational": 2, 00:07:03.350 "base_bdevs_list": [ 00:07:03.350 { 00:07:03.350 "name": "BaseBdev1", 00:07:03.350 "uuid": "3b77f17e-4b07-46cd-ae3e-ff0ab4e60ccb", 00:07:03.350 "is_configured": true, 00:07:03.350 "data_offset": 0, 00:07:03.350 "data_size": 65536 00:07:03.350 }, 00:07:03.350 { 00:07:03.350 "name": "BaseBdev2", 00:07:03.350 "uuid": "2c29e246-72c6-4997-9cb9-d7c239920838", 00:07:03.350 "is_configured": true, 00:07:03.350 "data_offset": 0, 00:07:03.350 "data_size": 65536 00:07:03.350 } 00:07:03.350 ] 00:07:03.350 } 00:07:03.350 } 00:07:03.350 }' 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:03.350 BaseBdev2' 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.350 01:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:03.350 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.350 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.350 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.350 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.350 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:03.350 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.350 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.350 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.350 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.350 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.350 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.350 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:03.350 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.350 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.350 [2024-10-15 01:08:16.069323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:03.350 [2024-10-15 01:08:16.069356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:03.350 [2024-10-15 01:08:16.069405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.610 01:08:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.610 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.611 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:03.611 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.611 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.611 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.611 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.611 "name": "Existed_Raid", 00:07:03.611 "uuid": "a2eb611c-b0db-4fe6-bc2c-395c6320d7f2", 00:07:03.611 "strip_size_kb": 64, 00:07:03.611 "state": "offline", 00:07:03.611 "raid_level": "raid0", 00:07:03.611 "superblock": false, 00:07:03.611 "num_base_bdevs": 2, 00:07:03.611 "num_base_bdevs_discovered": 1, 00:07:03.611 "num_base_bdevs_operational": 1, 00:07:03.611 "base_bdevs_list": [ 00:07:03.611 { 00:07:03.611 "name": null, 00:07:03.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.611 "is_configured": false, 00:07:03.611 "data_offset": 0, 00:07:03.611 "data_size": 65536 00:07:03.611 }, 00:07:03.611 { 00:07:03.611 "name": "BaseBdev2", 00:07:03.611 "uuid": "2c29e246-72c6-4997-9cb9-d7c239920838", 00:07:03.611 "is_configured": true, 00:07:03.611 "data_offset": 0, 00:07:03.611 "data_size": 65536 00:07:03.611 } 00:07:03.611 ] 00:07:03.611 }' 00:07:03.611 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.611 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.871 01:08:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.871 [2024-10-15 01:08:16.575882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:03.871 [2024-10-15 01:08:16.575949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.871 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.131 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:04.131 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:04.131 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:04.131 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:04.131 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71876 00:07:04.131 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71876 ']' 00:07:04.131 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71876 00:07:04.131 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:04.131 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.131 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71876 00:07:04.131 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.131 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.131 killing process with pid 71876 00:07:04.131 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71876' 00:07:04.131 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71876 00:07:04.131 [2024-10-15 01:08:16.675024] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:04.131 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71876 00:07:04.131 [2024-10-15 01:08:16.676021] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:04.391 00:07:04.391 real 0m3.758s 00:07:04.391 user 0m5.942s 00:07:04.391 sys 
0m0.737s 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.391 ************************************ 00:07:04.391 END TEST raid_state_function_test 00:07:04.391 ************************************ 00:07:04.391 01:08:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:04.391 01:08:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:04.391 01:08:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.391 01:08:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.391 ************************************ 00:07:04.391 START TEST raid_state_function_test_sb 00:07:04.391 ************************************ 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:04.391 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:04.392 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:04.392 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:04.392 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:04.392 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:04.392 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:04.392 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72118 00:07:04.392 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:04.392 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 
72118' 00:07:04.392 Process raid pid: 72118 00:07:04.392 01:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72118 00:07:04.392 01:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72118 ']' 00:07:04.392 01:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.392 01:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.392 01:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.392 01:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.392 01:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.392 [2024-10-15 01:08:17.044781] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:07:04.392 [2024-10-15 01:08:17.044897] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.651 [2024-10-15 01:08:17.189804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.651 [2024-10-15 01:08:17.216387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.651 [2024-10-15 01:08:17.258759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.651 [2024-10-15 01:08:17.258796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.220 [2024-10-15 01:08:17.872638] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:05.220 [2024-10-15 01:08:17.872702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:05.220 [2024-10-15 01:08:17.872717] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:05.220 [2024-10-15 01:08:17.872729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.220 
01:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.220 "name": "Existed_Raid", 00:07:05.220 "uuid": "a0e9ecf2-965d-46c2-88d2-b7097f9627b8", 00:07:05.220 "strip_size_kb": 
64, 00:07:05.220 "state": "configuring", 00:07:05.220 "raid_level": "raid0", 00:07:05.220 "superblock": true, 00:07:05.220 "num_base_bdevs": 2, 00:07:05.220 "num_base_bdevs_discovered": 0, 00:07:05.220 "num_base_bdevs_operational": 2, 00:07:05.220 "base_bdevs_list": [ 00:07:05.220 { 00:07:05.220 "name": "BaseBdev1", 00:07:05.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.220 "is_configured": false, 00:07:05.220 "data_offset": 0, 00:07:05.220 "data_size": 0 00:07:05.220 }, 00:07:05.220 { 00:07:05.220 "name": "BaseBdev2", 00:07:05.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.220 "is_configured": false, 00:07:05.220 "data_offset": 0, 00:07:05.220 "data_size": 0 00:07:05.220 } 00:07:05.220 ] 00:07:05.220 }' 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.220 01:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.787 [2024-10-15 01:08:18.315799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:05.787 [2024-10-15 01:08:18.315849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.787 01:08:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.787 [2024-10-15 01:08:18.327785] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:05.787 [2024-10-15 01:08:18.327823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:05.787 [2024-10-15 01:08:18.327831] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:05.787 [2024-10-15 01:08:18.327851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.787 [2024-10-15 01:08:18.348638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:05.787 BaseBdev1 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.787 [ 00:07:05.787 { 00:07:05.787 "name": "BaseBdev1", 00:07:05.787 "aliases": [ 00:07:05.787 "ea01fad6-a25c-43e4-90c1-f221a4296e5a" 00:07:05.787 ], 00:07:05.787 "product_name": "Malloc disk", 00:07:05.787 "block_size": 512, 00:07:05.787 "num_blocks": 65536, 00:07:05.787 "uuid": "ea01fad6-a25c-43e4-90c1-f221a4296e5a", 00:07:05.787 "assigned_rate_limits": { 00:07:05.787 "rw_ios_per_sec": 0, 00:07:05.787 "rw_mbytes_per_sec": 0, 00:07:05.787 "r_mbytes_per_sec": 0, 00:07:05.787 "w_mbytes_per_sec": 0 00:07:05.787 }, 00:07:05.787 "claimed": true, 00:07:05.787 "claim_type": "exclusive_write", 00:07:05.787 "zoned": false, 00:07:05.787 "supported_io_types": { 00:07:05.787 "read": true, 00:07:05.787 "write": true, 00:07:05.787 "unmap": true, 00:07:05.787 "flush": true, 00:07:05.787 "reset": true, 00:07:05.787 "nvme_admin": false, 00:07:05.787 "nvme_io": false, 00:07:05.787 "nvme_io_md": false, 00:07:05.787 "write_zeroes": true, 00:07:05.787 "zcopy": true, 00:07:05.787 "get_zone_info": false, 00:07:05.787 "zone_management": false, 00:07:05.787 "zone_append": false, 00:07:05.787 "compare": false, 00:07:05.787 "compare_and_write": false, 00:07:05.787 
"abort": true, 00:07:05.787 "seek_hole": false, 00:07:05.787 "seek_data": false, 00:07:05.787 "copy": true, 00:07:05.787 "nvme_iov_md": false 00:07:05.787 }, 00:07:05.787 "memory_domains": [ 00:07:05.787 { 00:07:05.787 "dma_device_id": "system", 00:07:05.787 "dma_device_type": 1 00:07:05.787 }, 00:07:05.787 { 00:07:05.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.787 "dma_device_type": 2 00:07:05.787 } 00:07:05.787 ], 00:07:05.787 "driver_specific": {} 00:07:05.787 } 00:07:05.787 ] 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.787 "name": "Existed_Raid", 00:07:05.787 "uuid": "3c87064b-e619-4056-99ee-e0187ccf11fb", 00:07:05.787 "strip_size_kb": 64, 00:07:05.787 "state": "configuring", 00:07:05.787 "raid_level": "raid0", 00:07:05.787 "superblock": true, 00:07:05.787 "num_base_bdevs": 2, 00:07:05.787 "num_base_bdevs_discovered": 1, 00:07:05.787 "num_base_bdevs_operational": 2, 00:07:05.787 "base_bdevs_list": [ 00:07:05.787 { 00:07:05.787 "name": "BaseBdev1", 00:07:05.787 "uuid": "ea01fad6-a25c-43e4-90c1-f221a4296e5a", 00:07:05.787 "is_configured": true, 00:07:05.787 "data_offset": 2048, 00:07:05.787 "data_size": 63488 00:07:05.787 }, 00:07:05.787 { 00:07:05.787 "name": "BaseBdev2", 00:07:05.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.787 "is_configured": false, 00:07:05.787 "data_offset": 0, 00:07:05.787 "data_size": 0 00:07:05.787 } 00:07:05.787 ] 00:07:05.787 }' 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.787 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.363 [2024-10-15 01:08:18.799928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:06.363 [2024-10-15 01:08:18.799982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.363 [2024-10-15 01:08:18.811960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:06.363 [2024-10-15 01:08:18.813862] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:06.363 [2024-10-15 01:08:18.813901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.363 "name": "Existed_Raid", 00:07:06.363 "uuid": "d4b78ccf-4dad-4962-89ed-d43b4e42af53", 00:07:06.363 "strip_size_kb": 64, 00:07:06.363 "state": "configuring", 00:07:06.363 "raid_level": "raid0", 00:07:06.363 "superblock": true, 00:07:06.363 "num_base_bdevs": 2, 00:07:06.363 "num_base_bdevs_discovered": 1, 00:07:06.363 "num_base_bdevs_operational": 2, 00:07:06.363 "base_bdevs_list": [ 00:07:06.363 { 00:07:06.363 "name": "BaseBdev1", 00:07:06.363 "uuid": "ea01fad6-a25c-43e4-90c1-f221a4296e5a", 00:07:06.363 "is_configured": true, 00:07:06.363 "data_offset": 2048, 
00:07:06.363 "data_size": 63488 00:07:06.363 }, 00:07:06.363 { 00:07:06.363 "name": "BaseBdev2", 00:07:06.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.363 "is_configured": false, 00:07:06.363 "data_offset": 0, 00:07:06.363 "data_size": 0 00:07:06.363 } 00:07:06.363 ] 00:07:06.363 }' 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.363 01:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.636 [2024-10-15 01:08:19.270177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:06.636 [2024-10-15 01:08:19.270371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:06.636 [2024-10-15 01:08:19.270385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:06.636 [2024-10-15 01:08:19.270623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:06.636 BaseBdev2 00:07:06.636 [2024-10-15 01:08:19.270770] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:06.636 [2024-10-15 01:08:19.270784] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:06.636 [2024-10-15 01:08:19.270888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.636 [ 00:07:06.636 { 00:07:06.636 "name": "BaseBdev2", 00:07:06.636 "aliases": [ 00:07:06.636 "5aa6f8c2-6efb-466d-a391-cfbd8a9c1143" 00:07:06.636 ], 00:07:06.636 "product_name": "Malloc disk", 00:07:06.636 "block_size": 512, 00:07:06.636 "num_blocks": 65536, 00:07:06.636 "uuid": "5aa6f8c2-6efb-466d-a391-cfbd8a9c1143", 00:07:06.636 "assigned_rate_limits": { 00:07:06.636 "rw_ios_per_sec": 0, 00:07:06.636 "rw_mbytes_per_sec": 0, 00:07:06.636 "r_mbytes_per_sec": 0, 00:07:06.636 "w_mbytes_per_sec": 0 00:07:06.636 }, 00:07:06.636 "claimed": true, 00:07:06.636 "claim_type": 
"exclusive_write", 00:07:06.636 "zoned": false, 00:07:06.636 "supported_io_types": { 00:07:06.636 "read": true, 00:07:06.636 "write": true, 00:07:06.636 "unmap": true, 00:07:06.636 "flush": true, 00:07:06.636 "reset": true, 00:07:06.636 "nvme_admin": false, 00:07:06.636 "nvme_io": false, 00:07:06.636 "nvme_io_md": false, 00:07:06.636 "write_zeroes": true, 00:07:06.636 "zcopy": true, 00:07:06.636 "get_zone_info": false, 00:07:06.636 "zone_management": false, 00:07:06.636 "zone_append": false, 00:07:06.636 "compare": false, 00:07:06.636 "compare_and_write": false, 00:07:06.636 "abort": true, 00:07:06.636 "seek_hole": false, 00:07:06.636 "seek_data": false, 00:07:06.636 "copy": true, 00:07:06.636 "nvme_iov_md": false 00:07:06.636 }, 00:07:06.636 "memory_domains": [ 00:07:06.636 { 00:07:06.636 "dma_device_id": "system", 00:07:06.636 "dma_device_type": 1 00:07:06.636 }, 00:07:06.636 { 00:07:06.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.636 "dma_device_type": 2 00:07:06.636 } 00:07:06.636 ], 00:07:06.636 "driver_specific": {} 00:07:06.636 } 00:07:06.636 ] 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.636 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.895 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.895 "name": "Existed_Raid", 00:07:06.895 "uuid": "d4b78ccf-4dad-4962-89ed-d43b4e42af53", 00:07:06.895 "strip_size_kb": 64, 00:07:06.895 "state": "online", 00:07:06.895 "raid_level": "raid0", 00:07:06.895 "superblock": true, 00:07:06.895 "num_base_bdevs": 2, 00:07:06.895 "num_base_bdevs_discovered": 2, 00:07:06.896 "num_base_bdevs_operational": 2, 00:07:06.896 "base_bdevs_list": [ 00:07:06.896 { 00:07:06.896 "name": "BaseBdev1", 00:07:06.896 "uuid": "ea01fad6-a25c-43e4-90c1-f221a4296e5a", 00:07:06.896 "is_configured": true, 00:07:06.896 "data_offset": 2048, 00:07:06.896 "data_size": 63488 
00:07:06.896 }, 00:07:06.896 { 00:07:06.896 "name": "BaseBdev2", 00:07:06.896 "uuid": "5aa6f8c2-6efb-466d-a391-cfbd8a9c1143", 00:07:06.896 "is_configured": true, 00:07:06.896 "data_offset": 2048, 00:07:06.896 "data_size": 63488 00:07:06.896 } 00:07:06.896 ] 00:07:06.896 }' 00:07:06.896 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.896 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:07.155 [2024-10-15 01:08:19.761619] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:07.155 "name": 
"Existed_Raid", 00:07:07.155 "aliases": [ 00:07:07.155 "d4b78ccf-4dad-4962-89ed-d43b4e42af53" 00:07:07.155 ], 00:07:07.155 "product_name": "Raid Volume", 00:07:07.155 "block_size": 512, 00:07:07.155 "num_blocks": 126976, 00:07:07.155 "uuid": "d4b78ccf-4dad-4962-89ed-d43b4e42af53", 00:07:07.155 "assigned_rate_limits": { 00:07:07.155 "rw_ios_per_sec": 0, 00:07:07.155 "rw_mbytes_per_sec": 0, 00:07:07.155 "r_mbytes_per_sec": 0, 00:07:07.155 "w_mbytes_per_sec": 0 00:07:07.155 }, 00:07:07.155 "claimed": false, 00:07:07.155 "zoned": false, 00:07:07.155 "supported_io_types": { 00:07:07.155 "read": true, 00:07:07.155 "write": true, 00:07:07.155 "unmap": true, 00:07:07.155 "flush": true, 00:07:07.155 "reset": true, 00:07:07.155 "nvme_admin": false, 00:07:07.155 "nvme_io": false, 00:07:07.155 "nvme_io_md": false, 00:07:07.155 "write_zeroes": true, 00:07:07.155 "zcopy": false, 00:07:07.155 "get_zone_info": false, 00:07:07.155 "zone_management": false, 00:07:07.155 "zone_append": false, 00:07:07.155 "compare": false, 00:07:07.155 "compare_and_write": false, 00:07:07.155 "abort": false, 00:07:07.155 "seek_hole": false, 00:07:07.155 "seek_data": false, 00:07:07.155 "copy": false, 00:07:07.155 "nvme_iov_md": false 00:07:07.155 }, 00:07:07.155 "memory_domains": [ 00:07:07.155 { 00:07:07.155 "dma_device_id": "system", 00:07:07.155 "dma_device_type": 1 00:07:07.155 }, 00:07:07.155 { 00:07:07.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.155 "dma_device_type": 2 00:07:07.155 }, 00:07:07.155 { 00:07:07.155 "dma_device_id": "system", 00:07:07.155 "dma_device_type": 1 00:07:07.155 }, 00:07:07.155 { 00:07:07.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.155 "dma_device_type": 2 00:07:07.155 } 00:07:07.155 ], 00:07:07.155 "driver_specific": { 00:07:07.155 "raid": { 00:07:07.155 "uuid": "d4b78ccf-4dad-4962-89ed-d43b4e42af53", 00:07:07.155 "strip_size_kb": 64, 00:07:07.155 "state": "online", 00:07:07.155 "raid_level": "raid0", 00:07:07.155 "superblock": true, 00:07:07.155 
"num_base_bdevs": 2, 00:07:07.155 "num_base_bdevs_discovered": 2, 00:07:07.155 "num_base_bdevs_operational": 2, 00:07:07.155 "base_bdevs_list": [ 00:07:07.155 { 00:07:07.155 "name": "BaseBdev1", 00:07:07.155 "uuid": "ea01fad6-a25c-43e4-90c1-f221a4296e5a", 00:07:07.155 "is_configured": true, 00:07:07.155 "data_offset": 2048, 00:07:07.155 "data_size": 63488 00:07:07.155 }, 00:07:07.155 { 00:07:07.155 "name": "BaseBdev2", 00:07:07.155 "uuid": "5aa6f8c2-6efb-466d-a391-cfbd8a9c1143", 00:07:07.155 "is_configured": true, 00:07:07.155 "data_offset": 2048, 00:07:07.155 "data_size": 63488 00:07:07.155 } 00:07:07.155 ] 00:07:07.155 } 00:07:07.155 } 00:07:07.155 }' 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:07.155 BaseBdev2' 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.155 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.415 [2024-10-15 01:08:19.973073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:07.415 [2024-10-15 01:08:19.973101] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:07.415 [2024-10-15 01:08:19.973152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.415 01:08:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.415 01:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.415 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.415 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.415 "name": "Existed_Raid", 00:07:07.415 "uuid": "d4b78ccf-4dad-4962-89ed-d43b4e42af53", 00:07:07.415 "strip_size_kb": 64, 00:07:07.415 "state": "offline", 00:07:07.415 "raid_level": "raid0", 00:07:07.415 "superblock": true, 00:07:07.415 "num_base_bdevs": 2, 00:07:07.415 "num_base_bdevs_discovered": 1, 00:07:07.415 "num_base_bdevs_operational": 1, 00:07:07.415 "base_bdevs_list": [ 00:07:07.415 { 00:07:07.415 "name": null, 00:07:07.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.416 "is_configured": false, 00:07:07.416 "data_offset": 0, 00:07:07.416 "data_size": 63488 00:07:07.416 }, 00:07:07.416 { 00:07:07.416 "name": "BaseBdev2", 00:07:07.416 "uuid": "5aa6f8c2-6efb-466d-a391-cfbd8a9c1143", 00:07:07.416 "is_configured": true, 00:07:07.416 "data_offset": 2048, 00:07:07.416 "data_size": 63488 00:07:07.416 } 00:07:07.416 ] 00:07:07.416 }' 00:07:07.416 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.416 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.675 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:07.675 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:07.675 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.675 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:07.675 01:08:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.675 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.675 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.936 [2024-10-15 01:08:20.427539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:07.936 [2024-10-15 01:08:20.427636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:07.936 01:08:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72118 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72118 ']' 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72118 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72118 00:07:07.936 killing process with pid 72118 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72118' 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72118 00:07:07.936 [2024-10-15 01:08:20.521837] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:07.936 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72118 00:07:07.936 [2024-10-15 01:08:20.522827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.198 01:08:20 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:08.198 00:07:08.198 real 0m3.777s 00:07:08.198 user 0m5.982s 00:07:08.198 sys 0m0.719s 00:07:08.198 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.198 01:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.198 ************************************ 00:07:08.198 END TEST raid_state_function_test_sb 00:07:08.198 ************************************ 00:07:08.198 01:08:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:08.198 01:08:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:08.198 01:08:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.198 01:08:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.198 ************************************ 00:07:08.198 START TEST raid_superblock_test 00:07:08.198 ************************************ 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:08.198 01:08:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72353 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72353 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72353 ']' 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.198 01:08:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.198 [2024-10-15 01:08:20.887582] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:07:08.198 [2024-10-15 01:08:20.887788] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72353 ] 00:07:08.459 [2024-10-15 01:08:21.033529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.459 [2024-10-15 01:08:21.060962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.459 [2024-10-15 01:08:21.103714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.459 [2024-10-15 01:08:21.103827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:09.027 01:08:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.027 malloc1 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.027 [2024-10-15 01:08:21.730263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:09.027 [2024-10-15 01:08:21.730319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.027 [2024-10-15 01:08:21.730339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:09.027 [2024-10-15 01:08:21.730350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.027 [2024-10-15 01:08:21.732489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.027 [2024-10-15 01:08:21.732528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:09.027 pt1 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:09.027 01:08:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.027 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.286 malloc2 00:07:09.286 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.286 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:09.286 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.286 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.286 [2024-10-15 01:08:21.759287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:09.286 [2024-10-15 01:08:21.759378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.286 [2024-10-15 01:08:21.759421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:09.286 
[2024-10-15 01:08:21.759456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.287 [2024-10-15 01:08:21.761538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.287 [2024-10-15 01:08:21.761620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:09.287 pt2 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.287 [2024-10-15 01:08:21.771297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:09.287 [2024-10-15 01:08:21.773165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:09.287 [2024-10-15 01:08:21.773378] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:09.287 [2024-10-15 01:08:21.773426] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:09.287 [2024-10-15 01:08:21.773686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:09.287 [2024-10-15 01:08:21.773849] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:09.287 [2024-10-15 01:08:21.773897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:09.287 [2024-10-15 01:08:21.774064] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.287 "name": "raid_bdev1", 00:07:09.287 "uuid": 
"336888a4-8206-432a-803e-ee87348d5c5a", 00:07:09.287 "strip_size_kb": 64, 00:07:09.287 "state": "online", 00:07:09.287 "raid_level": "raid0", 00:07:09.287 "superblock": true, 00:07:09.287 "num_base_bdevs": 2, 00:07:09.287 "num_base_bdevs_discovered": 2, 00:07:09.287 "num_base_bdevs_operational": 2, 00:07:09.287 "base_bdevs_list": [ 00:07:09.287 { 00:07:09.287 "name": "pt1", 00:07:09.287 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.287 "is_configured": true, 00:07:09.287 "data_offset": 2048, 00:07:09.287 "data_size": 63488 00:07:09.287 }, 00:07:09.287 { 00:07:09.287 "name": "pt2", 00:07:09.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.287 "is_configured": true, 00:07:09.287 "data_offset": 2048, 00:07:09.287 "data_size": 63488 00:07:09.287 } 00:07:09.287 ] 00:07:09.287 }' 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.287 01:08:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.545 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:09.546 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:09.546 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:09.546 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:09.546 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:09.546 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:09.546 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:09.546 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.546 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.546 
01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:09.546 [2024-10-15 01:08:22.202813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.546 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.546 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:09.546 "name": "raid_bdev1", 00:07:09.546 "aliases": [ 00:07:09.546 "336888a4-8206-432a-803e-ee87348d5c5a" 00:07:09.546 ], 00:07:09.546 "product_name": "Raid Volume", 00:07:09.546 "block_size": 512, 00:07:09.546 "num_blocks": 126976, 00:07:09.546 "uuid": "336888a4-8206-432a-803e-ee87348d5c5a", 00:07:09.546 "assigned_rate_limits": { 00:07:09.546 "rw_ios_per_sec": 0, 00:07:09.546 "rw_mbytes_per_sec": 0, 00:07:09.546 "r_mbytes_per_sec": 0, 00:07:09.546 "w_mbytes_per_sec": 0 00:07:09.546 }, 00:07:09.546 "claimed": false, 00:07:09.546 "zoned": false, 00:07:09.546 "supported_io_types": { 00:07:09.546 "read": true, 00:07:09.546 "write": true, 00:07:09.546 "unmap": true, 00:07:09.546 "flush": true, 00:07:09.546 "reset": true, 00:07:09.546 "nvme_admin": false, 00:07:09.546 "nvme_io": false, 00:07:09.546 "nvme_io_md": false, 00:07:09.546 "write_zeroes": true, 00:07:09.546 "zcopy": false, 00:07:09.546 "get_zone_info": false, 00:07:09.546 "zone_management": false, 00:07:09.546 "zone_append": false, 00:07:09.546 "compare": false, 00:07:09.546 "compare_and_write": false, 00:07:09.546 "abort": false, 00:07:09.546 "seek_hole": false, 00:07:09.546 "seek_data": false, 00:07:09.546 "copy": false, 00:07:09.546 "nvme_iov_md": false 00:07:09.546 }, 00:07:09.546 "memory_domains": [ 00:07:09.546 { 00:07:09.546 "dma_device_id": "system", 00:07:09.546 "dma_device_type": 1 00:07:09.546 }, 00:07:09.546 { 00:07:09.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.546 "dma_device_type": 2 00:07:09.546 }, 00:07:09.546 { 00:07:09.546 "dma_device_id": "system", 00:07:09.546 
"dma_device_type": 1 00:07:09.546 }, 00:07:09.546 { 00:07:09.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.546 "dma_device_type": 2 00:07:09.546 } 00:07:09.546 ], 00:07:09.546 "driver_specific": { 00:07:09.546 "raid": { 00:07:09.546 "uuid": "336888a4-8206-432a-803e-ee87348d5c5a", 00:07:09.546 "strip_size_kb": 64, 00:07:09.546 "state": "online", 00:07:09.546 "raid_level": "raid0", 00:07:09.546 "superblock": true, 00:07:09.546 "num_base_bdevs": 2, 00:07:09.546 "num_base_bdevs_discovered": 2, 00:07:09.546 "num_base_bdevs_operational": 2, 00:07:09.546 "base_bdevs_list": [ 00:07:09.546 { 00:07:09.546 "name": "pt1", 00:07:09.546 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.546 "is_configured": true, 00:07:09.546 "data_offset": 2048, 00:07:09.546 "data_size": 63488 00:07:09.546 }, 00:07:09.546 { 00:07:09.546 "name": "pt2", 00:07:09.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.546 "is_configured": true, 00:07:09.546 "data_offset": 2048, 00:07:09.546 "data_size": 63488 00:07:09.546 } 00:07:09.546 ] 00:07:09.546 } 00:07:09.546 } 00:07:09.546 }' 00:07:09.546 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:09.804 pt2' 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.804 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.805 [2024-10-15 01:08:22.406401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=336888a4-8206-432a-803e-ee87348d5c5a 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 336888a4-8206-432a-803e-ee87348d5c5a ']' 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.805 [2024-10-15 01:08:22.450069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:09.805 [2024-10-15 01:08:22.450095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:09.805 [2024-10-15 01:08:22.450170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.805 [2024-10-15 01:08:22.450237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:09.805 [2024-10-15 01:08:22.450247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.805 
01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.805 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.064 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.064 [2024-10-15 01:08:22.585850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:10.064 [2024-10-15 01:08:22.587693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:10.065 [2024-10-15 01:08:22.587759] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:10.065 [2024-10-15 01:08:22.587802] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:10.065 [2024-10-15 01:08:22.587819] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:10.065 [2024-10-15 01:08:22.587828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:10.065 request: 00:07:10.065 { 00:07:10.065 "name": "raid_bdev1", 00:07:10.065 "raid_level": "raid0", 00:07:10.065 "base_bdevs": [ 00:07:10.065 "malloc1", 00:07:10.065 "malloc2" 00:07:10.065 ], 00:07:10.065 "strip_size_kb": 64, 00:07:10.065 "superblock": false, 00:07:10.065 "method": "bdev_raid_create", 00:07:10.065 "req_id": 1 00:07:10.065 } 00:07:10.065 Got JSON-RPC error response 00:07:10.065 response: 00:07:10.065 { 00:07:10.065 "code": -17, 00:07:10.065 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:10.065 } 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.065 [2024-10-15 01:08:22.633732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:10.065 [2024-10-15 01:08:22.633818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.065 [2024-10-15 01:08:22.633851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:10.065 [2024-10-15 01:08:22.633877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.065 [2024-10-15 01:08:22.635960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.065 [2024-10-15 01:08:22.636025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:10.065 [2024-10-15 01:08:22.636126] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:10.065 [2024-10-15 01:08:22.636174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:10.065 pt1 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.065 "name": "raid_bdev1", 00:07:10.065 "uuid": "336888a4-8206-432a-803e-ee87348d5c5a", 00:07:10.065 "strip_size_kb": 64, 00:07:10.065 "state": "configuring", 00:07:10.065 "raid_level": "raid0", 00:07:10.065 "superblock": true, 00:07:10.065 "num_base_bdevs": 2, 00:07:10.065 "num_base_bdevs_discovered": 1, 00:07:10.065 "num_base_bdevs_operational": 2, 00:07:10.065 "base_bdevs_list": [ 00:07:10.065 { 00:07:10.065 "name": "pt1", 00:07:10.065 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:10.065 "is_configured": true, 00:07:10.065 "data_offset": 2048, 00:07:10.065 "data_size": 63488 00:07:10.065 }, 00:07:10.065 { 00:07:10.065 "name": null, 00:07:10.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:10.065 "is_configured": false, 00:07:10.065 "data_offset": 2048, 00:07:10.065 "data_size": 63488 00:07:10.065 } 00:07:10.065 ] 00:07:10.065 }' 00:07:10.065 01:08:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.065 01:08:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.633 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:10.633 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:10.633 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:10.633 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:10.633 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.633 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.633 [2024-10-15 01:08:23.069038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:10.633 [2024-10-15 01:08:23.069102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.633 [2024-10-15 01:08:23.069123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:10.633 [2024-10-15 01:08:23.069133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.633 [2024-10-15 01:08:23.069525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.633 [2024-10-15 01:08:23.069543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:10.634 [2024-10-15 01:08:23.069613] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:10.634 [2024-10-15 01:08:23.069634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:10.634 [2024-10-15 01:08:23.069718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:10.634 [2024-10-15 01:08:23.069726] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:10.634 [2024-10-15 01:08:23.069972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:10.634 [2024-10-15 01:08:23.070090] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:10.634 [2024-10-15 01:08:23.070105] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:10.634 [2024-10-15 01:08:23.070223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.634 pt2 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.634 "name": "raid_bdev1", 00:07:10.634 "uuid": "336888a4-8206-432a-803e-ee87348d5c5a", 00:07:10.634 "strip_size_kb": 64, 00:07:10.634 "state": "online", 00:07:10.634 "raid_level": "raid0", 00:07:10.634 "superblock": true, 00:07:10.634 "num_base_bdevs": 2, 00:07:10.634 "num_base_bdevs_discovered": 2, 00:07:10.634 "num_base_bdevs_operational": 2, 00:07:10.634 "base_bdevs_list": [ 00:07:10.634 { 00:07:10.634 "name": "pt1", 00:07:10.634 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:10.634 "is_configured": true, 00:07:10.634 "data_offset": 2048, 00:07:10.634 "data_size": 63488 00:07:10.634 }, 00:07:10.634 { 00:07:10.634 "name": "pt2", 00:07:10.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:10.634 "is_configured": true, 00:07:10.634 "data_offset": 2048, 00:07:10.634 "data_size": 63488 00:07:10.634 } 00:07:10.634 ] 00:07:10.634 }' 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.634 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.893 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:10.893 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:10.893 
01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:10.893 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:10.893 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:10.893 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:10.893 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:10.893 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:10.893 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.893 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.893 [2024-10-15 01:08:23.472592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.893 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.893 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:10.893 "name": "raid_bdev1", 00:07:10.893 "aliases": [ 00:07:10.893 "336888a4-8206-432a-803e-ee87348d5c5a" 00:07:10.893 ], 00:07:10.893 "product_name": "Raid Volume", 00:07:10.893 "block_size": 512, 00:07:10.893 "num_blocks": 126976, 00:07:10.893 "uuid": "336888a4-8206-432a-803e-ee87348d5c5a", 00:07:10.893 "assigned_rate_limits": { 00:07:10.893 "rw_ios_per_sec": 0, 00:07:10.893 "rw_mbytes_per_sec": 0, 00:07:10.893 "r_mbytes_per_sec": 0, 00:07:10.893 "w_mbytes_per_sec": 0 00:07:10.893 }, 00:07:10.893 "claimed": false, 00:07:10.893 "zoned": false, 00:07:10.893 "supported_io_types": { 00:07:10.893 "read": true, 00:07:10.893 "write": true, 00:07:10.893 "unmap": true, 00:07:10.893 "flush": true, 00:07:10.893 "reset": true, 00:07:10.893 "nvme_admin": false, 00:07:10.893 "nvme_io": false, 00:07:10.893 "nvme_io_md": false, 00:07:10.893 
"write_zeroes": true, 00:07:10.893 "zcopy": false, 00:07:10.893 "get_zone_info": false, 00:07:10.893 "zone_management": false, 00:07:10.893 "zone_append": false, 00:07:10.893 "compare": false, 00:07:10.893 "compare_and_write": false, 00:07:10.893 "abort": false, 00:07:10.893 "seek_hole": false, 00:07:10.893 "seek_data": false, 00:07:10.893 "copy": false, 00:07:10.893 "nvme_iov_md": false 00:07:10.893 }, 00:07:10.893 "memory_domains": [ 00:07:10.893 { 00:07:10.893 "dma_device_id": "system", 00:07:10.893 "dma_device_type": 1 00:07:10.893 }, 00:07:10.893 { 00:07:10.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.893 "dma_device_type": 2 00:07:10.893 }, 00:07:10.893 { 00:07:10.893 "dma_device_id": "system", 00:07:10.893 "dma_device_type": 1 00:07:10.893 }, 00:07:10.893 { 00:07:10.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.894 "dma_device_type": 2 00:07:10.894 } 00:07:10.894 ], 00:07:10.894 "driver_specific": { 00:07:10.894 "raid": { 00:07:10.894 "uuid": "336888a4-8206-432a-803e-ee87348d5c5a", 00:07:10.894 "strip_size_kb": 64, 00:07:10.894 "state": "online", 00:07:10.894 "raid_level": "raid0", 00:07:10.894 "superblock": true, 00:07:10.894 "num_base_bdevs": 2, 00:07:10.894 "num_base_bdevs_discovered": 2, 00:07:10.894 "num_base_bdevs_operational": 2, 00:07:10.894 "base_bdevs_list": [ 00:07:10.894 { 00:07:10.894 "name": "pt1", 00:07:10.894 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:10.894 "is_configured": true, 00:07:10.894 "data_offset": 2048, 00:07:10.894 "data_size": 63488 00:07:10.894 }, 00:07:10.894 { 00:07:10.894 "name": "pt2", 00:07:10.894 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:10.894 "is_configured": true, 00:07:10.894 "data_offset": 2048, 00:07:10.894 "data_size": 63488 00:07:10.894 } 00:07:10.894 ] 00:07:10.894 } 00:07:10.894 } 00:07:10.894 }' 00:07:10.894 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
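The `jq` filter that ends the entry above is how the test turns the raid bdev's `driver_specific` dump into the list of configured base bdev names. A standalone replay of that filter on a trimmed copy of the JSON (a sketch; it assumes `jq` is on PATH and keeps only the fields the filter touches):

```shell
# Trimmed copy of the bdev_get_bdevs output above -- only the fields the
# filter reads are kept. The null name mirrors an unconfigured slot.
raid_bdev_info='{
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        { "name": "pt1", "is_configured": true },
        { "name": "pt2", "is_configured": true },
        { "name": null,  "is_configured": false }
      ]
    }
  }
}'

# Same filter as bdev_raid.sh@188: keep configured entries, emit their names.
base_bdev_names=$(jq -r \
  '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' \
  <<< "$raid_bdev_info")

echo "$base_bdev_names"    # pt1 and pt2, one per line
```

This matches the `base_bdev_names='pt1 pt2'` assignment visible in the trace that follows.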
00:07:10.894 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:10.894 pt2' 00:07:10.894 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.894 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:10.894 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.894 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:10.894 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.894 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.894 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.894 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.153 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.153 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.154 01:08:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.154 [2024-10-15 01:08:23.700189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 336888a4-8206-432a-803e-ee87348d5c5a '!=' 336888a4-8206-432a-803e-ee87348d5c5a ']' 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72353 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72353 ']' 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72353 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72353 00:07:11.154 killing process with pid 72353 
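The `killprocess 72353` sequence above follows a fixed pattern from autotest_common.sh: confirm the pid is alive with `kill -0`, resolve its name with `ps` for the "killing process" log line, then send SIGTERM. A condensed sketch of that pattern (the real helper has extra branches for sudo-owned processes and retries; `ps --no-headers` assumes Linux procps):

```shell
# Condensed killprocess: probe liveness, log the name, terminate, reap.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1          # nothing to kill
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap if it is our child
}

sleep 60 &
victim=$!
killprocess "$victim"
```

The same `kill -0` probe is what `autotest_common.sh@954` uses above to verify the pid before killing it.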
00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72353' 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72353 00:07:11.154 [2024-10-15 01:08:23.766722] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.154 [2024-10-15 01:08:23.766794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.154 [2024-10-15 01:08:23.766842] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.154 [2024-10-15 01:08:23.766851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:11.154 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72353 00:07:11.154 [2024-10-15 01:08:23.789618] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.414 ************************************ 00:07:11.414 END TEST raid_superblock_test 00:07:11.414 01:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:11.414 00:07:11.414 real 0m3.198s 00:07:11.414 user 0m4.968s 00:07:11.414 sys 0m0.663s 00:07:11.414 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.414 01:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.414 ************************************ 00:07:11.414 01:08:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:11.414 01:08:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:11.414 01:08:24 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.414 01:08:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.414 ************************************ 00:07:11.414 START TEST raid_read_error_test 00:07:11.414 ************************************ 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:11.414 01:08:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.052ap5l0La 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72554 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:11.414 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72554 00:07:11.415 01:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72554 ']' 00:07:11.415 01:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.415 01:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.415 01:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
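The `waitforlisten 72554` behind the "Waiting for process to start up..." message above polls until the target's RPC socket is usable. A minimal sketch of the polling idea (socket path from the trace; `waitforsocket` is a hypothetical name, and the real helper issues an RPC probe rather than only testing that the socket file exists):

```shell
# Poll for a UNIX domain socket to appear, up to max_retries * 0.1 s.
waitforsocket() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        [ -S "$sock" ] && return 0    # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}

# How the trace's wait would look:
# waitforsocket /var/tmp/spdk.sock
```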
00:07:11.415 01:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.415 01:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.674 [2024-10-15 01:08:24.165716] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:07:11.674 [2024-10-15 01:08:24.165934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72554 ] 00:07:11.674 [2024-10-15 01:08:24.310027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.674 [2024-10-15 01:08:24.336343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.674 [2024-10-15 01:08:24.378542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.674 [2024-10-15 01:08:24.378662] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.611 01:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.611 01:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:12.611 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:12.611 01:08:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:12.611 01:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.612 01:08:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.612 BaseBdev1_malloc 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.612 true 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.612 [2024-10-15 01:08:25.013374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:12.612 [2024-10-15 01:08:25.013423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.612 [2024-10-15 01:08:25.013444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:12.612 [2024-10-15 01:08:25.013453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.612 [2024-10-15 01:08:25.015622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.612 [2024-10-15 01:08:25.015711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:12.612 BaseBdev1 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
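Each base bdev in this error test is a three-layer stack, assembled by the `rpc_cmd` calls visible above: a malloc bdev at the bottom, an error bdev on top of it (`bdev_error_create` names its product `EE_<base>`, which is why the passthru binds to `EE_BaseBdev1_malloc`), and a passthru bdev that gives the stack the name the raid test expects. Laid out as plain rpc.py calls (a fragment for orientation only; it needs a running SPDK target and the standard scripts/rpc.py):

```shell
# Build the BaseBdev1 stack the trace shows, bottom to top.
./scripts/rpc.py bdev_malloc_create 32 512 -b BaseBdev1_malloc    # 32 MiB, 512 B blocks
./scripts/rpc.py bdev_error_create BaseBdev1_malloc               # -> EE_BaseBdev1_malloc
./scripts/rpc.py bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1

# Later, the read-error case arms the middle layer:
./scripts/rpc.py bdev_error_inject_error EE_BaseBdev1_malloc read failure
```

The 32 MiB / 512 B geometry is consistent with the `data_size: 63488` seen in the raid JSON: 65536 blocks minus the 2048-block superblock offset.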
00:07:12.612 BaseBdev2_malloc 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.612 true 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.612 [2024-10-15 01:08:25.053739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:12.612 [2024-10-15 01:08:25.053784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.612 [2024-10-15 01:08:25.053816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:12.612 [2024-10-15 01:08:25.053832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.612 [2024-10-15 01:08:25.055894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.612 [2024-10-15 01:08:25.055928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:12.612 BaseBdev2 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:12.612 01:08:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.612 [2024-10-15 01:08:25.065791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.612 [2024-10-15 01:08:25.067580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:12.612 [2024-10-15 01:08:25.067752] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:12.612 [2024-10-15 01:08:25.067765] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:12.612 [2024-10-15 01:08:25.068024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:12.612 [2024-10-15 01:08:25.068141] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:12.612 [2024-10-15 01:08:25.068157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:12.612 [2024-10-15 01:08:25.068322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.612 "name": "raid_bdev1", 00:07:12.612 "uuid": "0ab83456-0bdc-49c3-ad79-c33dc3c2f399", 00:07:12.612 "strip_size_kb": 64, 00:07:12.612 "state": "online", 00:07:12.612 "raid_level": "raid0", 00:07:12.612 "superblock": true, 00:07:12.612 "num_base_bdevs": 2, 00:07:12.612 "num_base_bdevs_discovered": 2, 00:07:12.612 "num_base_bdevs_operational": 2, 00:07:12.612 "base_bdevs_list": [ 00:07:12.612 { 00:07:12.612 "name": "BaseBdev1", 00:07:12.612 "uuid": "8a2d049d-4c32-582e-b31e-c54d494bc40a", 00:07:12.612 "is_configured": true, 00:07:12.612 "data_offset": 2048, 00:07:12.612 "data_size": 63488 00:07:12.612 }, 00:07:12.612 { 00:07:12.612 "name": "BaseBdev2", 00:07:12.612 "uuid": "6e2f82b2-de53-589c-a48e-60dcb44a7933", 00:07:12.612 "is_configured": true, 00:07:12.612 "data_offset": 2048, 00:07:12.612 "data_size": 63488 00:07:12.612 } 00:07:12.612 ] 00:07:12.612 }' 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.612 01:08:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.871 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:12.871 01:08:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:13.130 [2024-10-15 01:08:25.605264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.081 "name": "raid_bdev1", 00:07:14.081 "uuid": "0ab83456-0bdc-49c3-ad79-c33dc3c2f399", 00:07:14.081 "strip_size_kb": 64, 00:07:14.081 "state": "online", 00:07:14.081 "raid_level": "raid0", 00:07:14.081 "superblock": true, 00:07:14.081 "num_base_bdevs": 2, 00:07:14.081 "num_base_bdevs_discovered": 2, 00:07:14.081 "num_base_bdevs_operational": 2, 00:07:14.081 "base_bdevs_list": [ 00:07:14.081 { 00:07:14.081 "name": "BaseBdev1", 00:07:14.081 "uuid": "8a2d049d-4c32-582e-b31e-c54d494bc40a", 00:07:14.081 "is_configured": true, 00:07:14.081 "data_offset": 2048, 00:07:14.081 "data_size": 63488 00:07:14.081 }, 00:07:14.081 { 00:07:14.081 "name": "BaseBdev2", 00:07:14.081 "uuid": "6e2f82b2-de53-589c-a48e-60dcb44a7933", 00:07:14.081 "is_configured": true, 00:07:14.081 "data_offset": 2048, 00:07:14.081 "data_size": 63488 00:07:14.081 } 00:07:14.081 ] 00:07:14.081 }' 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.081 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.341 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:14.341 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.341 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.341 [2024-10-15 01:08:26.940480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:14.341 [2024-10-15 01:08:26.940511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:14.341 [2024-10-15 01:08:26.942954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.341 [2024-10-15 01:08:26.942994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.341 [2024-10-15 01:08:26.943025] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.341 [2024-10-15 01:08:26.943034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:14.341 { 00:07:14.341 "results": [ 00:07:14.341 { 00:07:14.341 "job": "raid_bdev1", 00:07:14.341 "core_mask": "0x1", 00:07:14.341 "workload": "randrw", 00:07:14.341 "percentage": 50, 00:07:14.341 "status": "finished", 00:07:14.341 "queue_depth": 1, 00:07:14.341 "io_size": 131072, 00:07:14.341 "runtime": 1.336001, 00:07:14.341 "iops": 17569.597627546686, 00:07:14.341 "mibps": 2196.1997034433357, 00:07:14.341 "io_failed": 1, 00:07:14.341 "io_timeout": 0, 00:07:14.341 "avg_latency_us": 78.73950902847822, 00:07:14.341 "min_latency_us": 24.593886462882097, 00:07:14.341 "max_latency_us": 1402.2986899563318 00:07:14.341 } 00:07:14.341 ], 00:07:14.341 "core_count": 1 00:07:14.341 } 00:07:14.341 01:08:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.341 01:08:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72554 00:07:14.341 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72554 ']' 00:07:14.341 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72554 00:07:14.341 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:14.341 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.341 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72554 00:07:14.341 killing process with pid 72554 00:07:14.341 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.341 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.341 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72554' 00:07:14.341 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72554 00:07:14.341 [2024-10-15 01:08:26.984391] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.341 01:08:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72554 00:07:14.341 [2024-10-15 01:08:27.000069] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.601 01:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:14.601 01:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.052ap5l0La 00:07:14.601 01:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:14.601 01:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:14.601 01:08:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:14.601 01:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:14.601 01:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:14.601 01:08:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:14.601 00:07:14.601 real 0m3.141s 00:07:14.601 user 0m4.012s 00:07:14.601 sys 0m0.475s 00:07:14.601 ************************************ 00:07:14.601 END TEST raid_read_error_test 00:07:14.601 ************************************ 00:07:14.601 01:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.601 01:08:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.601 01:08:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:14.601 01:08:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:14.601 01:08:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.601 01:08:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.601 ************************************ 00:07:14.601 START TEST raid_write_error_test 00:07:14.602 ************************************ 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:14.602 01:08:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LpQT104yrd 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72683 00:07:14.602 01:08:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72683 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 72683 ']' 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.602 01:08:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.861 [2024-10-15 01:08:27.382868] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:07:14.861 [2024-10-15 01:08:27.383099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72683 ] 00:07:14.861 [2024-10-15 01:08:27.526419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.862 [2024-10-15 01:08:27.552397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.121 [2024-10-15 01:08:27.595002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.121 [2024-10-15 01:08:27.595142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.698 BaseBdev1_malloc 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.698 true 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.698 [2024-10-15 01:08:28.233767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:15.698 [2024-10-15 01:08:28.233869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.698 [2024-10-15 01:08:28.233921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:15.698 [2024-10-15 01:08:28.233951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.698 [2024-10-15 01:08:28.236079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.698 [2024-10-15 01:08:28.236149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:15.698 BaseBdev1 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.698 BaseBdev2_malloc 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:15.698 01:08:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.698 true 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.698 [2024-10-15 01:08:28.274233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:15.698 [2024-10-15 01:08:28.274329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.698 [2024-10-15 01:08:28.274349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:15.698 [2024-10-15 01:08:28.274366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.698 [2024-10-15 01:08:28.276456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.698 [2024-10-15 01:08:28.276499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:15.698 BaseBdev2 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.698 [2024-10-15 01:08:28.286284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:15.698 [2024-10-15 01:08:28.288109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:15.698 [2024-10-15 01:08:28.288311] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:15.698 [2024-10-15 01:08:28.288330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:15.698 [2024-10-15 01:08:28.288582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:15.698 [2024-10-15 01:08:28.288709] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:15.698 [2024-10-15 01:08:28.288721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:15.698 [2024-10-15 01:08:28.288843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.698 "name": "raid_bdev1", 00:07:15.698 "uuid": "2ef96ea9-f2ca-461e-ad02-d1bfc8165d0a", 00:07:15.698 "strip_size_kb": 64, 00:07:15.698 "state": "online", 00:07:15.698 "raid_level": "raid0", 00:07:15.698 "superblock": true, 00:07:15.698 "num_base_bdevs": 2, 00:07:15.698 "num_base_bdevs_discovered": 2, 00:07:15.698 "num_base_bdevs_operational": 2, 00:07:15.698 "base_bdevs_list": [ 00:07:15.698 { 00:07:15.698 "name": "BaseBdev1", 00:07:15.698 "uuid": "4e108a65-41d0-5294-8a54-774df803c98d", 00:07:15.698 "is_configured": true, 00:07:15.698 "data_offset": 2048, 00:07:15.698 "data_size": 63488 00:07:15.698 }, 00:07:15.698 { 00:07:15.698 "name": "BaseBdev2", 00:07:15.698 "uuid": "d5cbf659-3555-5c35-97c9-d205a3348187", 00:07:15.698 "is_configured": true, 00:07:15.698 "data_offset": 2048, 00:07:15.698 "data_size": 63488 00:07:15.698 } 00:07:15.698 ] 00:07:15.698 }' 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.698 01:08:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.267 01:08:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:16.267 01:08:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:16.267 [2024-10-15 01:08:28.829705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:17.206 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:17.206 01:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.206 01:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.207 01:08:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.207 "name": "raid_bdev1", 00:07:17.207 "uuid": "2ef96ea9-f2ca-461e-ad02-d1bfc8165d0a", 00:07:17.207 "strip_size_kb": 64, 00:07:17.207 "state": "online", 00:07:17.207 "raid_level": "raid0", 00:07:17.207 "superblock": true, 00:07:17.207 "num_base_bdevs": 2, 00:07:17.207 "num_base_bdevs_discovered": 2, 00:07:17.207 "num_base_bdevs_operational": 2, 00:07:17.207 "base_bdevs_list": [ 00:07:17.207 { 00:07:17.207 "name": "BaseBdev1", 00:07:17.207 "uuid": "4e108a65-41d0-5294-8a54-774df803c98d", 00:07:17.207 "is_configured": true, 00:07:17.207 "data_offset": 2048, 00:07:17.207 "data_size": 63488 00:07:17.207 }, 00:07:17.207 { 00:07:17.207 "name": "BaseBdev2", 00:07:17.207 "uuid": "d5cbf659-3555-5c35-97c9-d205a3348187", 00:07:17.207 "is_configured": true, 00:07:17.207 "data_offset": 2048, 00:07:17.207 "data_size": 63488 00:07:17.207 } 00:07:17.207 ] 00:07:17.207 }' 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.207 01:08:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.466 01:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:17.466 01:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.466 01:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.466 [2024-10-15 01:08:30.181161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:17.466 [2024-10-15 01:08:30.181204] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.466 [2024-10-15 01:08:30.183731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.466 [2024-10-15 01:08:30.183776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.466 [2024-10-15 01:08:30.183816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.467 [2024-10-15 01:08:30.183828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:17.467 { 00:07:17.467 "results": [ 00:07:17.467 { 00:07:17.467 "job": "raid_bdev1", 00:07:17.467 "core_mask": "0x1", 00:07:17.467 "workload": "randrw", 00:07:17.467 "percentage": 50, 00:07:17.467 "status": "finished", 00:07:17.467 "queue_depth": 1, 00:07:17.467 "io_size": 131072, 00:07:17.467 "runtime": 1.352255, 00:07:17.467 "iops": 17598.012209235683, 00:07:17.467 "mibps": 2199.7515261544604, 00:07:17.467 "io_failed": 1, 00:07:17.467 "io_timeout": 0, 00:07:17.467 "avg_latency_us": 78.60993757135658, 00:07:17.467 "min_latency_us": 24.817467248908297, 00:07:17.467 "max_latency_us": 1402.2986899563318 00:07:17.467 } 00:07:17.467 ], 00:07:17.467 "core_count": 1 00:07:17.467 } 00:07:17.467 01:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.467 01:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72683 00:07:17.467 01:08:30 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 72683 ']' 00:07:17.467 01:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 72683 00:07:17.467 01:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72683 00:07:17.727 killing process with pid 72683 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72683' 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 72683 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 72683 00:07:17.727 [2024-10-15 01:08:30.209687] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.727 [2024-10-15 01:08:30.224998] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LpQT104yrd 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:17.727 00:07:17.727 real 0m3.146s 00:07:17.727 user 0m4.055s 00:07:17.727 sys 0m0.437s 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.727 01:08:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.727 ************************************ 00:07:17.727 END TEST raid_write_error_test 00:07:17.727 ************************************ 00:07:17.987 01:08:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:17.987 01:08:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:17.987 01:08:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:17.987 01:08:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.987 01:08:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.987 ************************************ 00:07:17.987 START TEST raid_state_function_test 00:07:17.987 ************************************ 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72810 00:07:17.987 01:08:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72810' 00:07:17.987 Process raid pid: 72810 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72810 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72810 ']' 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.987 01:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.988 01:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.988 [2024-10-15 01:08:30.617099] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:07:17.988 [2024-10-15 01:08:30.617284] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.248 [2024-10-15 01:08:30.764569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.248 [2024-10-15 01:08:30.791357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.248 [2024-10-15 01:08:30.833812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.248 [2024-10-15 01:08:30.833850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.820 [2024-10-15 01:08:31.467383] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:18.820 [2024-10-15 01:08:31.467432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:18.820 [2024-10-15 01:08:31.467447] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:18.820 [2024-10-15 01:08:31.467459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.820 01:08:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.820 "name": "Existed_Raid", 00:07:18.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.820 "strip_size_kb": 64, 00:07:18.820 "state": "configuring", 00:07:18.820 
"raid_level": "concat", 00:07:18.820 "superblock": false, 00:07:18.820 "num_base_bdevs": 2, 00:07:18.820 "num_base_bdevs_discovered": 0, 00:07:18.820 "num_base_bdevs_operational": 2, 00:07:18.820 "base_bdevs_list": [ 00:07:18.820 { 00:07:18.820 "name": "BaseBdev1", 00:07:18.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.820 "is_configured": false, 00:07:18.820 "data_offset": 0, 00:07:18.820 "data_size": 0 00:07:18.820 }, 00:07:18.820 { 00:07:18.820 "name": "BaseBdev2", 00:07:18.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.820 "is_configured": false, 00:07:18.820 "data_offset": 0, 00:07:18.820 "data_size": 0 00:07:18.820 } 00:07:18.820 ] 00:07:18.820 }' 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.820 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.389 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:19.389 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.390 [2024-10-15 01:08:31.898599] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:19.390 [2024-10-15 01:08:31.898643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:19.390 [2024-10-15 01:08:31.906589] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:19.390 [2024-10-15 01:08:31.906629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:19.390 [2024-10-15 01:08:31.906637] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.390 [2024-10-15 01:08:31.906656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.390 [2024-10-15 01:08:31.923966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:19.390 BaseBdev1 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.390 [ 00:07:19.390 { 00:07:19.390 "name": "BaseBdev1", 00:07:19.390 "aliases": [ 00:07:19.390 "80986bc8-82ac-4533-ab72-7c08eb17b854" 00:07:19.390 ], 00:07:19.390 "product_name": "Malloc disk", 00:07:19.390 "block_size": 512, 00:07:19.390 "num_blocks": 65536, 00:07:19.390 "uuid": "80986bc8-82ac-4533-ab72-7c08eb17b854", 00:07:19.390 "assigned_rate_limits": { 00:07:19.390 "rw_ios_per_sec": 0, 00:07:19.390 "rw_mbytes_per_sec": 0, 00:07:19.390 "r_mbytes_per_sec": 0, 00:07:19.390 "w_mbytes_per_sec": 0 00:07:19.390 }, 00:07:19.390 "claimed": true, 00:07:19.390 "claim_type": "exclusive_write", 00:07:19.390 "zoned": false, 00:07:19.390 "supported_io_types": { 00:07:19.390 "read": true, 00:07:19.390 "write": true, 00:07:19.390 "unmap": true, 00:07:19.390 "flush": true, 00:07:19.390 "reset": true, 00:07:19.390 "nvme_admin": false, 00:07:19.390 "nvme_io": false, 00:07:19.390 "nvme_io_md": false, 00:07:19.390 "write_zeroes": true, 00:07:19.390 "zcopy": true, 00:07:19.390 "get_zone_info": false, 00:07:19.390 "zone_management": false, 00:07:19.390 "zone_append": false, 00:07:19.390 "compare": false, 00:07:19.390 "compare_and_write": false, 00:07:19.390 "abort": true, 00:07:19.390 "seek_hole": false, 00:07:19.390 "seek_data": false, 00:07:19.390 "copy": true, 00:07:19.390 "nvme_iov_md": 
false 00:07:19.390 }, 00:07:19.390 "memory_domains": [ 00:07:19.390 { 00:07:19.390 "dma_device_id": "system", 00:07:19.390 "dma_device_type": 1 00:07:19.390 }, 00:07:19.390 { 00:07:19.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.390 "dma_device_type": 2 00:07:19.390 } 00:07:19.390 ], 00:07:19.390 "driver_specific": {} 00:07:19.390 } 00:07:19.390 ] 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.390 
01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.390 01:08:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.390 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.390 "name": "Existed_Raid", 00:07:19.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.390 "strip_size_kb": 64, 00:07:19.390 "state": "configuring", 00:07:19.390 "raid_level": "concat", 00:07:19.390 "superblock": false, 00:07:19.390 "num_base_bdevs": 2, 00:07:19.390 "num_base_bdevs_discovered": 1, 00:07:19.390 "num_base_bdevs_operational": 2, 00:07:19.390 "base_bdevs_list": [ 00:07:19.390 { 00:07:19.390 "name": "BaseBdev1", 00:07:19.390 "uuid": "80986bc8-82ac-4533-ab72-7c08eb17b854", 00:07:19.390 "is_configured": true, 00:07:19.390 "data_offset": 0, 00:07:19.390 "data_size": 65536 00:07:19.390 }, 00:07:19.390 { 00:07:19.390 "name": "BaseBdev2", 00:07:19.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.390 "is_configured": false, 00:07:19.390 "data_offset": 0, 00:07:19.390 "data_size": 0 00:07:19.390 } 00:07:19.390 ] 00:07:19.390 }' 00:07:19.390 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.390 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.960 [2024-10-15 01:08:32.395243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:19.960 [2024-10-15 01:08:32.395294] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.960 [2024-10-15 01:08:32.403281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:19.960 [2024-10-15 01:08:32.405087] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.960 [2024-10-15 01:08:32.405125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.960 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.960 "name": "Existed_Raid", 00:07:19.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.960 "strip_size_kb": 64, 00:07:19.960 "state": "configuring", 00:07:19.960 "raid_level": "concat", 00:07:19.960 "superblock": false, 00:07:19.960 "num_base_bdevs": 2, 00:07:19.960 "num_base_bdevs_discovered": 1, 00:07:19.960 "num_base_bdevs_operational": 2, 00:07:19.960 "base_bdevs_list": [ 00:07:19.960 { 00:07:19.960 "name": "BaseBdev1", 00:07:19.960 "uuid": "80986bc8-82ac-4533-ab72-7c08eb17b854", 00:07:19.960 "is_configured": true, 00:07:19.960 "data_offset": 0, 00:07:19.960 "data_size": 65536 00:07:19.960 }, 00:07:19.960 { 00:07:19.960 "name": "BaseBdev2", 00:07:19.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.960 "is_configured": false, 00:07:19.960 "data_offset": 0, 00:07:19.960 "data_size": 0 00:07:19.960 } 
00:07:19.961 ] 00:07:19.961 }' 00:07:19.961 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.961 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.220 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:20.220 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.220 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.220 [2024-10-15 01:08:32.805553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:20.220 [2024-10-15 01:08:32.805598] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:20.220 [2024-10-15 01:08:32.805606] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:20.221 [2024-10-15 01:08:32.805881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:20.221 [2024-10-15 01:08:32.806029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:20.221 [2024-10-15 01:08:32.806049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:20.221 [2024-10-15 01:08:32.806266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.221 BaseBdev2 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:20.221 01:08:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.221 [ 00:07:20.221 { 00:07:20.221 "name": "BaseBdev2", 00:07:20.221 "aliases": [ 00:07:20.221 "f4dd14bd-72b7-460c-a8f7-0f83850a9993" 00:07:20.221 ], 00:07:20.221 "product_name": "Malloc disk", 00:07:20.221 "block_size": 512, 00:07:20.221 "num_blocks": 65536, 00:07:20.221 "uuid": "f4dd14bd-72b7-460c-a8f7-0f83850a9993", 00:07:20.221 "assigned_rate_limits": { 00:07:20.221 "rw_ios_per_sec": 0, 00:07:20.221 "rw_mbytes_per_sec": 0, 00:07:20.221 "r_mbytes_per_sec": 0, 00:07:20.221 "w_mbytes_per_sec": 0 00:07:20.221 }, 00:07:20.221 "claimed": true, 00:07:20.221 "claim_type": "exclusive_write", 00:07:20.221 "zoned": false, 00:07:20.221 "supported_io_types": { 00:07:20.221 "read": true, 00:07:20.221 "write": true, 00:07:20.221 "unmap": true, 00:07:20.221 "flush": true, 00:07:20.221 "reset": true, 00:07:20.221 "nvme_admin": false, 00:07:20.221 "nvme_io": false, 00:07:20.221 "nvme_io_md": 
false, 00:07:20.221 "write_zeroes": true, 00:07:20.221 "zcopy": true, 00:07:20.221 "get_zone_info": false, 00:07:20.221 "zone_management": false, 00:07:20.221 "zone_append": false, 00:07:20.221 "compare": false, 00:07:20.221 "compare_and_write": false, 00:07:20.221 "abort": true, 00:07:20.221 "seek_hole": false, 00:07:20.221 "seek_data": false, 00:07:20.221 "copy": true, 00:07:20.221 "nvme_iov_md": false 00:07:20.221 }, 00:07:20.221 "memory_domains": [ 00:07:20.221 { 00:07:20.221 "dma_device_id": "system", 00:07:20.221 "dma_device_type": 1 00:07:20.221 }, 00:07:20.221 { 00:07:20.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.221 "dma_device_type": 2 00:07:20.221 } 00:07:20.221 ], 00:07:20.221 "driver_specific": {} 00:07:20.221 } 00:07:20.221 ] 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.221 "name": "Existed_Raid", 00:07:20.221 "uuid": "414d293e-e005-4c21-b355-d57cd75c9e62", 00:07:20.221 "strip_size_kb": 64, 00:07:20.221 "state": "online", 00:07:20.221 "raid_level": "concat", 00:07:20.221 "superblock": false, 00:07:20.221 "num_base_bdevs": 2, 00:07:20.221 "num_base_bdevs_discovered": 2, 00:07:20.221 "num_base_bdevs_operational": 2, 00:07:20.221 "base_bdevs_list": [ 00:07:20.221 { 00:07:20.221 "name": "BaseBdev1", 00:07:20.221 "uuid": "80986bc8-82ac-4533-ab72-7c08eb17b854", 00:07:20.221 "is_configured": true, 00:07:20.221 "data_offset": 0, 00:07:20.221 "data_size": 65536 00:07:20.221 }, 00:07:20.221 { 00:07:20.221 "name": "BaseBdev2", 00:07:20.221 "uuid": "f4dd14bd-72b7-460c-a8f7-0f83850a9993", 00:07:20.221 "is_configured": true, 00:07:20.221 "data_offset": 0, 00:07:20.221 "data_size": 65536 00:07:20.221 } 00:07:20.221 ] 00:07:20.221 }' 00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:20.221 01:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.792 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:20.792 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:20.792 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:20.792 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:20.792 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:20.792 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:20.792 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:20.792 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.792 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.792 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:20.792 [2024-10-15 01:08:33.257102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.792 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.792 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:20.792 "name": "Existed_Raid", 00:07:20.792 "aliases": [ 00:07:20.792 "414d293e-e005-4c21-b355-d57cd75c9e62" 00:07:20.792 ], 00:07:20.792 "product_name": "Raid Volume", 00:07:20.792 "block_size": 512, 00:07:20.792 "num_blocks": 131072, 00:07:20.792 "uuid": "414d293e-e005-4c21-b355-d57cd75c9e62", 00:07:20.792 "assigned_rate_limits": { 00:07:20.792 "rw_ios_per_sec": 0, 00:07:20.792 "rw_mbytes_per_sec": 0, 00:07:20.792 "r_mbytes_per_sec": 
0, 00:07:20.792 "w_mbytes_per_sec": 0 00:07:20.792 }, 00:07:20.792 "claimed": false, 00:07:20.792 "zoned": false, 00:07:20.792 "supported_io_types": { 00:07:20.792 "read": true, 00:07:20.792 "write": true, 00:07:20.792 "unmap": true, 00:07:20.792 "flush": true, 00:07:20.792 "reset": true, 00:07:20.792 "nvme_admin": false, 00:07:20.792 "nvme_io": false, 00:07:20.792 "nvme_io_md": false, 00:07:20.792 "write_zeroes": true, 00:07:20.792 "zcopy": false, 00:07:20.792 "get_zone_info": false, 00:07:20.792 "zone_management": false, 00:07:20.792 "zone_append": false, 00:07:20.792 "compare": false, 00:07:20.792 "compare_and_write": false, 00:07:20.792 "abort": false, 00:07:20.792 "seek_hole": false, 00:07:20.792 "seek_data": false, 00:07:20.792 "copy": false, 00:07:20.792 "nvme_iov_md": false 00:07:20.792 }, 00:07:20.792 "memory_domains": [ 00:07:20.792 { 00:07:20.792 "dma_device_id": "system", 00:07:20.792 "dma_device_type": 1 00:07:20.792 }, 00:07:20.792 { 00:07:20.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.792 "dma_device_type": 2 00:07:20.792 }, 00:07:20.792 { 00:07:20.792 "dma_device_id": "system", 00:07:20.792 "dma_device_type": 1 00:07:20.792 }, 00:07:20.792 { 00:07:20.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.793 "dma_device_type": 2 00:07:20.793 } 00:07:20.793 ], 00:07:20.793 "driver_specific": { 00:07:20.793 "raid": { 00:07:20.793 "uuid": "414d293e-e005-4c21-b355-d57cd75c9e62", 00:07:20.793 "strip_size_kb": 64, 00:07:20.793 "state": "online", 00:07:20.793 "raid_level": "concat", 00:07:20.793 "superblock": false, 00:07:20.793 "num_base_bdevs": 2, 00:07:20.793 "num_base_bdevs_discovered": 2, 00:07:20.793 "num_base_bdevs_operational": 2, 00:07:20.793 "base_bdevs_list": [ 00:07:20.793 { 00:07:20.793 "name": "BaseBdev1", 00:07:20.793 "uuid": "80986bc8-82ac-4533-ab72-7c08eb17b854", 00:07:20.793 "is_configured": true, 00:07:20.793 "data_offset": 0, 00:07:20.793 "data_size": 65536 00:07:20.793 }, 00:07:20.793 { 00:07:20.793 "name": "BaseBdev2", 
00:07:20.793 "uuid": "f4dd14bd-72b7-460c-a8f7-0f83850a9993", 00:07:20.793 "is_configured": true, 00:07:20.793 "data_offset": 0, 00:07:20.793 "data_size": 65536 00:07:20.793 } 00:07:20.793 ] 00:07:20.793 } 00:07:20.793 } 00:07:20.793 }' 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:20.793 BaseBdev2' 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.793 [2024-10-15 01:08:33.460568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:20.793 [2024-10-15 01:08:33.460600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:20.793 [2024-10-15 01:08:33.460666] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.793 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.053 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.053 "name": "Existed_Raid", 00:07:21.053 "uuid": "414d293e-e005-4c21-b355-d57cd75c9e62", 00:07:21.053 "strip_size_kb": 64, 00:07:21.053 
"state": "offline", 00:07:21.053 "raid_level": "concat", 00:07:21.053 "superblock": false, 00:07:21.053 "num_base_bdevs": 2, 00:07:21.053 "num_base_bdevs_discovered": 1, 00:07:21.053 "num_base_bdevs_operational": 1, 00:07:21.053 "base_bdevs_list": [ 00:07:21.053 { 00:07:21.053 "name": null, 00:07:21.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.053 "is_configured": false, 00:07:21.053 "data_offset": 0, 00:07:21.053 "data_size": 65536 00:07:21.053 }, 00:07:21.053 { 00:07:21.053 "name": "BaseBdev2", 00:07:21.053 "uuid": "f4dd14bd-72b7-460c-a8f7-0f83850a9993", 00:07:21.053 "is_configured": true, 00:07:21.053 "data_offset": 0, 00:07:21.053 "data_size": 65536 00:07:21.053 } 00:07:21.053 ] 00:07:21.053 }' 00:07:21.053 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.053 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.313 [2024-10-15 01:08:33.951314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:21.313 [2024-10-15 01:08:33.951371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.313 01:08:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.313 01:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:21.313 01:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:21.313 01:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:21.313 01:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72810 00:07:21.313 01:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72810 ']' 00:07:21.313 01:08:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 72810 00:07:21.313 01:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:21.313 01:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.313 01:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72810 00:07:21.573 01:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.573 01:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.573 01:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72810' 00:07:21.573 killing process with pid 72810 00:07:21.573 01:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72810 00:07:21.573 [2024-10-15 01:08:34.055650] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.573 01:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72810 00:07:21.573 [2024-10-15 01:08:34.056709] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.573 01:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:21.573 00:07:21.573 real 0m3.764s 00:07:21.573 user 0m5.956s 00:07:21.573 sys 0m0.749s 00:07:21.573 01:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.573 01:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.573 ************************************ 00:07:21.573 END TEST raid_state_function_test 00:07:21.573 ************************************ 00:07:21.834 01:08:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:21.834 01:08:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:21.834 01:08:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.834 01:08:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.834 ************************************ 00:07:21.834 START TEST raid_state_function_test_sb 00:07:21.834 ************************************ 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73047 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:21.834 Process raid pid: 73047 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73047' 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73047 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73047 ']' 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.834 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.834 01:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.834 [2024-10-15 01:08:34.428006] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:07:21.834 [2024-10-15 01:08:34.428130] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.094 [2024-10-15 01:08:34.572595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.094 [2024-10-15 01:08:34.599279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.094 [2024-10-15 01:08:34.642527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.094 [2024-10-15 01:08:34.642567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.664 [2024-10-15 01:08:35.252305] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:22.664 [2024-10-15 01:08:35.252350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.664 [2024-10-15 01:08:35.252368] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.664 [2024-10-15 01:08:35.252379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.664 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.665 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.665 "name": "Existed_Raid", 00:07:22.665 "uuid": "fd58a13b-1f36-4ace-b6b2-7424ddf6c9d1", 00:07:22.665 "strip_size_kb": 64, 00:07:22.665 "state": "configuring", 00:07:22.665 "raid_level": "concat", 00:07:22.665 "superblock": true, 00:07:22.665 "num_base_bdevs": 2, 00:07:22.665 "num_base_bdevs_discovered": 0, 00:07:22.665 "num_base_bdevs_operational": 2, 00:07:22.665 "base_bdevs_list": [ 00:07:22.665 { 00:07:22.665 "name": "BaseBdev1", 00:07:22.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.665 "is_configured": false, 00:07:22.665 "data_offset": 0, 00:07:22.665 "data_size": 0 00:07:22.665 }, 00:07:22.665 { 00:07:22.665 "name": "BaseBdev2", 00:07:22.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.665 "is_configured": false, 00:07:22.665 "data_offset": 0, 00:07:22.665 "data_size": 0 00:07:22.665 } 00:07:22.665 ] 00:07:22.665 }' 00:07:22.665 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.665 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.234 [2024-10-15 01:08:35.667472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:23.234 
[2024-10-15 01:08:35.667515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.234 [2024-10-15 01:08:35.679479] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:23.234 [2024-10-15 01:08:35.679516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:23.234 [2024-10-15 01:08:35.679525] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.234 [2024-10-15 01:08:35.679543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.234 [2024-10-15 01:08:35.700382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.234 BaseBdev1 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.234 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.234 [ 00:07:23.234 { 00:07:23.234 "name": "BaseBdev1", 00:07:23.234 "aliases": [ 00:07:23.234 "265a1cb2-67c5-451f-8444-c082526da3bf" 00:07:23.234 ], 00:07:23.234 "product_name": "Malloc disk", 00:07:23.234 "block_size": 512, 00:07:23.234 "num_blocks": 65536, 00:07:23.234 "uuid": "265a1cb2-67c5-451f-8444-c082526da3bf", 00:07:23.234 "assigned_rate_limits": { 00:07:23.234 "rw_ios_per_sec": 0, 00:07:23.234 "rw_mbytes_per_sec": 0, 00:07:23.235 "r_mbytes_per_sec": 0, 00:07:23.235 "w_mbytes_per_sec": 0 00:07:23.235 }, 00:07:23.235 "claimed": true, 00:07:23.235 "claim_type": 
"exclusive_write", 00:07:23.235 "zoned": false, 00:07:23.235 "supported_io_types": { 00:07:23.235 "read": true, 00:07:23.235 "write": true, 00:07:23.235 "unmap": true, 00:07:23.235 "flush": true, 00:07:23.235 "reset": true, 00:07:23.235 "nvme_admin": false, 00:07:23.235 "nvme_io": false, 00:07:23.235 "nvme_io_md": false, 00:07:23.235 "write_zeroes": true, 00:07:23.235 "zcopy": true, 00:07:23.235 "get_zone_info": false, 00:07:23.235 "zone_management": false, 00:07:23.235 "zone_append": false, 00:07:23.235 "compare": false, 00:07:23.235 "compare_and_write": false, 00:07:23.235 "abort": true, 00:07:23.235 "seek_hole": false, 00:07:23.235 "seek_data": false, 00:07:23.235 "copy": true, 00:07:23.235 "nvme_iov_md": false 00:07:23.235 }, 00:07:23.235 "memory_domains": [ 00:07:23.235 { 00:07:23.235 "dma_device_id": "system", 00:07:23.235 "dma_device_type": 1 00:07:23.235 }, 00:07:23.235 { 00:07:23.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.235 "dma_device_type": 2 00:07:23.235 } 00:07:23.235 ], 00:07:23.235 "driver_specific": {} 00:07:23.235 } 00:07:23.235 ] 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.235 "name": "Existed_Raid", 00:07:23.235 "uuid": "49015b17-6490-4abd-87ee-7ae8d4ce2088", 00:07:23.235 "strip_size_kb": 64, 00:07:23.235 "state": "configuring", 00:07:23.235 "raid_level": "concat", 00:07:23.235 "superblock": true, 00:07:23.235 "num_base_bdevs": 2, 00:07:23.235 "num_base_bdevs_discovered": 1, 00:07:23.235 "num_base_bdevs_operational": 2, 00:07:23.235 "base_bdevs_list": [ 00:07:23.235 { 00:07:23.235 "name": "BaseBdev1", 00:07:23.235 "uuid": "265a1cb2-67c5-451f-8444-c082526da3bf", 00:07:23.235 "is_configured": true, 00:07:23.235 "data_offset": 2048, 00:07:23.235 "data_size": 63488 00:07:23.235 }, 00:07:23.235 { 00:07:23.235 "name": "BaseBdev2", 00:07:23.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.235 "is_configured": false, 00:07:23.235 
"data_offset": 0, 00:07:23.235 "data_size": 0 00:07:23.235 } 00:07:23.235 ] 00:07:23.235 }' 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.235 01:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.494 [2024-10-15 01:08:36.159635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:23.494 [2024-10-15 01:08:36.159692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.494 [2024-10-15 01:08:36.171658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.494 [2024-10-15 01:08:36.173466] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.494 [2024-10-15 01:08:36.173500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.494 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.753 01:08:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.753 "name": "Existed_Raid", 00:07:23.753 "uuid": "12fb4020-987c-4aa6-8c68-47de52caa40f", 00:07:23.753 "strip_size_kb": 64, 00:07:23.753 "state": "configuring", 00:07:23.753 "raid_level": "concat", 00:07:23.753 "superblock": true, 00:07:23.753 "num_base_bdevs": 2, 00:07:23.753 "num_base_bdevs_discovered": 1, 00:07:23.753 "num_base_bdevs_operational": 2, 00:07:23.753 "base_bdevs_list": [ 00:07:23.753 { 00:07:23.753 "name": "BaseBdev1", 00:07:23.753 "uuid": "265a1cb2-67c5-451f-8444-c082526da3bf", 00:07:23.753 "is_configured": true, 00:07:23.753 "data_offset": 2048, 00:07:23.753 "data_size": 63488 00:07:23.753 }, 00:07:23.753 { 00:07:23.753 "name": "BaseBdev2", 00:07:23.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.753 "is_configured": false, 00:07:23.753 "data_offset": 0, 00:07:23.753 "data_size": 0 00:07:23.753 } 00:07:23.753 ] 00:07:23.753 }' 00:07:23.753 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.753 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.012 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:24.012 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.012 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.012 [2024-10-15 01:08:36.610099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:24.012 [2024-10-15 01:08:36.610300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:24.012 [2024-10-15 01:08:36.610319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:24.012 [2024-10-15 01:08:36.610592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 
00:07:24.012 [2024-10-15 01:08:36.610733] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:24.012 [2024-10-15 01:08:36.610754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:24.012 BaseBdev2 00:07:24.012 [2024-10-15 01:08:36.610864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.012 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.012 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:24.012 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.013 [ 00:07:24.013 { 00:07:24.013 "name": "BaseBdev2", 00:07:24.013 "aliases": [ 00:07:24.013 "abbe8802-11e0-412a-abcc-471d8d3ef1cf" 00:07:24.013 ], 00:07:24.013 "product_name": "Malloc disk", 00:07:24.013 "block_size": 512, 00:07:24.013 "num_blocks": 65536, 00:07:24.013 "uuid": "abbe8802-11e0-412a-abcc-471d8d3ef1cf", 00:07:24.013 "assigned_rate_limits": { 00:07:24.013 "rw_ios_per_sec": 0, 00:07:24.013 "rw_mbytes_per_sec": 0, 00:07:24.013 "r_mbytes_per_sec": 0, 00:07:24.013 "w_mbytes_per_sec": 0 00:07:24.013 }, 00:07:24.013 "claimed": true, 00:07:24.013 "claim_type": "exclusive_write", 00:07:24.013 "zoned": false, 00:07:24.013 "supported_io_types": { 00:07:24.013 "read": true, 00:07:24.013 "write": true, 00:07:24.013 "unmap": true, 00:07:24.013 "flush": true, 00:07:24.013 "reset": true, 00:07:24.013 "nvme_admin": false, 00:07:24.013 "nvme_io": false, 00:07:24.013 "nvme_io_md": false, 00:07:24.013 "write_zeroes": true, 00:07:24.013 "zcopy": true, 00:07:24.013 "get_zone_info": false, 00:07:24.013 "zone_management": false, 00:07:24.013 "zone_append": false, 00:07:24.013 "compare": false, 00:07:24.013 "compare_and_write": false, 00:07:24.013 "abort": true, 00:07:24.013 "seek_hole": false, 00:07:24.013 "seek_data": false, 00:07:24.013 "copy": true, 00:07:24.013 "nvme_iov_md": false 00:07:24.013 }, 00:07:24.013 "memory_domains": [ 00:07:24.013 { 00:07:24.013 "dma_device_id": "system", 00:07:24.013 "dma_device_type": 1 00:07:24.013 }, 00:07:24.013 { 00:07:24.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.013 "dma_device_type": 2 00:07:24.013 } 00:07:24.013 ], 00:07:24.013 "driver_specific": {} 00:07:24.013 } 00:07:24.013 ] 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.013 "name": "Existed_Raid", 00:07:24.013 "uuid": "12fb4020-987c-4aa6-8c68-47de52caa40f", 00:07:24.013 "strip_size_kb": 64, 00:07:24.013 "state": "online", 00:07:24.013 "raid_level": "concat", 00:07:24.013 "superblock": true, 00:07:24.013 "num_base_bdevs": 2, 00:07:24.013 "num_base_bdevs_discovered": 2, 00:07:24.013 "num_base_bdevs_operational": 2, 00:07:24.013 "base_bdevs_list": [ 00:07:24.013 { 00:07:24.013 "name": "BaseBdev1", 00:07:24.013 "uuid": "265a1cb2-67c5-451f-8444-c082526da3bf", 00:07:24.013 "is_configured": true, 00:07:24.013 "data_offset": 2048, 00:07:24.013 "data_size": 63488 00:07:24.013 }, 00:07:24.013 { 00:07:24.013 "name": "BaseBdev2", 00:07:24.013 "uuid": "abbe8802-11e0-412a-abcc-471d8d3ef1cf", 00:07:24.013 "is_configured": true, 00:07:24.013 "data_offset": 2048, 00:07:24.013 "data_size": 63488 00:07:24.013 } 00:07:24.013 ] 00:07:24.013 }' 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.013 01:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.582 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:24.582 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:24.582 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:24.582 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:24.582 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:24.582 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:24.582 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:24.582 01:08:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.582 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.582 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:24.582 [2024-10-15 01:08:37.089559] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.582 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.582 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:24.582 "name": "Existed_Raid", 00:07:24.582 "aliases": [ 00:07:24.582 "12fb4020-987c-4aa6-8c68-47de52caa40f" 00:07:24.582 ], 00:07:24.582 "product_name": "Raid Volume", 00:07:24.582 "block_size": 512, 00:07:24.582 "num_blocks": 126976, 00:07:24.582 "uuid": "12fb4020-987c-4aa6-8c68-47de52caa40f", 00:07:24.582 "assigned_rate_limits": { 00:07:24.582 "rw_ios_per_sec": 0, 00:07:24.583 "rw_mbytes_per_sec": 0, 00:07:24.583 "r_mbytes_per_sec": 0, 00:07:24.583 "w_mbytes_per_sec": 0 00:07:24.583 }, 00:07:24.583 "claimed": false, 00:07:24.583 "zoned": false, 00:07:24.583 "supported_io_types": { 00:07:24.583 "read": true, 00:07:24.583 "write": true, 00:07:24.583 "unmap": true, 00:07:24.583 "flush": true, 00:07:24.583 "reset": true, 00:07:24.583 "nvme_admin": false, 00:07:24.583 "nvme_io": false, 00:07:24.583 "nvme_io_md": false, 00:07:24.583 "write_zeroes": true, 00:07:24.583 "zcopy": false, 00:07:24.583 "get_zone_info": false, 00:07:24.583 "zone_management": false, 00:07:24.583 "zone_append": false, 00:07:24.583 "compare": false, 00:07:24.583 "compare_and_write": false, 00:07:24.583 "abort": false, 00:07:24.583 "seek_hole": false, 00:07:24.583 "seek_data": false, 00:07:24.583 "copy": false, 00:07:24.583 "nvme_iov_md": false 00:07:24.583 }, 00:07:24.583 "memory_domains": [ 00:07:24.583 { 00:07:24.583 "dma_device_id": "system", 00:07:24.583 
"dma_device_type": 1 00:07:24.583 }, 00:07:24.583 { 00:07:24.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.583 "dma_device_type": 2 00:07:24.583 }, 00:07:24.583 { 00:07:24.583 "dma_device_id": "system", 00:07:24.583 "dma_device_type": 1 00:07:24.583 }, 00:07:24.583 { 00:07:24.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.583 "dma_device_type": 2 00:07:24.583 } 00:07:24.583 ], 00:07:24.583 "driver_specific": { 00:07:24.583 "raid": { 00:07:24.583 "uuid": "12fb4020-987c-4aa6-8c68-47de52caa40f", 00:07:24.583 "strip_size_kb": 64, 00:07:24.583 "state": "online", 00:07:24.583 "raid_level": "concat", 00:07:24.583 "superblock": true, 00:07:24.583 "num_base_bdevs": 2, 00:07:24.583 "num_base_bdevs_discovered": 2, 00:07:24.583 "num_base_bdevs_operational": 2, 00:07:24.583 "base_bdevs_list": [ 00:07:24.583 { 00:07:24.583 "name": "BaseBdev1", 00:07:24.583 "uuid": "265a1cb2-67c5-451f-8444-c082526da3bf", 00:07:24.583 "is_configured": true, 00:07:24.583 "data_offset": 2048, 00:07:24.583 "data_size": 63488 00:07:24.583 }, 00:07:24.583 { 00:07:24.583 "name": "BaseBdev2", 00:07:24.583 "uuid": "abbe8802-11e0-412a-abcc-471d8d3ef1cf", 00:07:24.583 "is_configured": true, 00:07:24.583 "data_offset": 2048, 00:07:24.583 "data_size": 63488 00:07:24.583 } 00:07:24.583 ] 00:07:24.583 } 00:07:24.583 } 00:07:24.583 }' 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:24.583 BaseBdev2' 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:24.583 01:08:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.583 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.583 [2024-10-15 01:08:37.297012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:24.583 [2024-10-15 01:08:37.297041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:24.583 [2024-10-15 01:08:37.297103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.842 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.842 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:24.842 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:24.842 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:24.842 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:24.842 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:24.842 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:24.842 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.842 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:24.843 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:24.843 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.843 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:24.843 01:08:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.843 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.843 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.843 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.843 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.843 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.843 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.843 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.843 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.843 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.843 "name": "Existed_Raid", 00:07:24.843 "uuid": "12fb4020-987c-4aa6-8c68-47de52caa40f", 00:07:24.843 "strip_size_kb": 64, 00:07:24.843 "state": "offline", 00:07:24.843 "raid_level": "concat", 00:07:24.843 "superblock": true, 00:07:24.843 "num_base_bdevs": 2, 00:07:24.843 "num_base_bdevs_discovered": 1, 00:07:24.843 "num_base_bdevs_operational": 1, 00:07:24.843 "base_bdevs_list": [ 00:07:24.843 { 00:07:24.843 "name": null, 00:07:24.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.843 "is_configured": false, 00:07:24.843 "data_offset": 0, 00:07:24.843 "data_size": 63488 00:07:24.843 }, 00:07:24.843 { 00:07:24.843 "name": "BaseBdev2", 00:07:24.843 "uuid": "abbe8802-11e0-412a-abcc-471d8d3ef1cf", 00:07:24.843 "is_configured": true, 00:07:24.843 "data_offset": 2048, 00:07:24.843 "data_size": 63488 00:07:24.843 } 00:07:24.843 ] 00:07:24.843 }' 00:07:24.843 01:08:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.843 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.102 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:25.102 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.102 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.102 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.102 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.102 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:25.102 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.102 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:25.102 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.103 [2024-10-15 01:08:37.779606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:25.103 [2024-10-15 01:08:37.779660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73047 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73047 ']' 00:07:25.103 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73047 00:07:25.362 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:25.362 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.362 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73047 00:07:25.362 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.362 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.362 killing process 
with pid 73047 00:07:25.362 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73047' 00:07:25.362 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73047 00:07:25.362 [2024-10-15 01:08:37.867490] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.362 01:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73047 00:07:25.362 [2024-10-15 01:08:37.868470] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.362 01:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:25.362 00:07:25.362 real 0m3.739s 00:07:25.362 user 0m5.890s 00:07:25.362 sys 0m0.751s 00:07:25.362 01:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.362 01:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.362 ************************************ 00:07:25.362 END TEST raid_state_function_test_sb 00:07:25.362 ************************************ 00:07:25.622 01:08:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:25.622 01:08:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:25.622 01:08:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.622 01:08:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.622 ************************************ 00:07:25.622 START TEST raid_superblock_test 00:07:25.622 ************************************ 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73282 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73282 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73282 ']' 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.622 01:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.622 [2024-10-15 01:08:38.235020] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:07:25.622 [2024-10-15 01:08:38.235161] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73282 ] 00:07:25.881 [2024-10-15 01:08:38.359910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.881 [2024-10-15 01:08:38.385315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.881 [2024-10-15 01:08:38.427453] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.881 [2024-10-15 01:08:38.427490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:26.450 01:08:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.450 malloc1 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.450 [2024-10-15 01:08:39.069599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:26.450 [2024-10-15 01:08:39.069654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.450 [2024-10-15 01:08:39.069673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:26.450 [2024-10-15 01:08:39.069683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.450 
[2024-10-15 01:08:39.071737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.450 [2024-10-15 01:08:39.071773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:26.450 pt1 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.450 malloc2 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.450 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.451 01:08:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.451 [2024-10-15 01:08:39.097984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:26.451 [2024-10-15 01:08:39.098044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.451 [2024-10-15 01:08:39.098060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:26.451 [2024-10-15 01:08:39.098069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.451 [2024-10-15 01:08:39.100067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.451 [2024-10-15 01:08:39.100100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:26.451 pt2 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.451 [2024-10-15 01:08:39.110003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:26.451 [2024-10-15 01:08:39.111830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:26.451 [2024-10-15 01:08:39.111965] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:26.451 [2024-10-15 01:08:39.111984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:26.451 
[2024-10-15 01:08:39.112222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:26.451 [2024-10-15 01:08:39.112374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:26.451 [2024-10-15 01:08:39.112391] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:26.451 [2024-10-15 01:08:39.112503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.451 01:08:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.451 "name": "raid_bdev1", 00:07:26.451 "uuid": "05119a90-8417-4f5b-bcc6-40e02262b174", 00:07:26.451 "strip_size_kb": 64, 00:07:26.451 "state": "online", 00:07:26.451 "raid_level": "concat", 00:07:26.451 "superblock": true, 00:07:26.451 "num_base_bdevs": 2, 00:07:26.451 "num_base_bdevs_discovered": 2, 00:07:26.451 "num_base_bdevs_operational": 2, 00:07:26.451 "base_bdevs_list": [ 00:07:26.451 { 00:07:26.451 "name": "pt1", 00:07:26.451 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.451 "is_configured": true, 00:07:26.451 "data_offset": 2048, 00:07:26.451 "data_size": 63488 00:07:26.451 }, 00:07:26.451 { 00:07:26.451 "name": "pt2", 00:07:26.451 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.451 "is_configured": true, 00:07:26.451 "data_offset": 2048, 00:07:26.451 "data_size": 63488 00:07:26.451 } 00:07:26.451 ] 00:07:26.451 }' 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.451 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.019 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:27.019 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:27.020 
01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:27.020 [2024-10-15 01:08:39.533565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:27.020 "name": "raid_bdev1", 00:07:27.020 "aliases": [ 00:07:27.020 "05119a90-8417-4f5b-bcc6-40e02262b174" 00:07:27.020 ], 00:07:27.020 "product_name": "Raid Volume", 00:07:27.020 "block_size": 512, 00:07:27.020 "num_blocks": 126976, 00:07:27.020 "uuid": "05119a90-8417-4f5b-bcc6-40e02262b174", 00:07:27.020 "assigned_rate_limits": { 00:07:27.020 "rw_ios_per_sec": 0, 00:07:27.020 "rw_mbytes_per_sec": 0, 00:07:27.020 "r_mbytes_per_sec": 0, 00:07:27.020 "w_mbytes_per_sec": 0 00:07:27.020 }, 00:07:27.020 "claimed": false, 00:07:27.020 "zoned": false, 00:07:27.020 "supported_io_types": { 00:07:27.020 "read": true, 00:07:27.020 "write": true, 00:07:27.020 "unmap": true, 00:07:27.020 "flush": true, 00:07:27.020 "reset": true, 00:07:27.020 "nvme_admin": false, 00:07:27.020 "nvme_io": false, 00:07:27.020 "nvme_io_md": false, 00:07:27.020 "write_zeroes": true, 00:07:27.020 "zcopy": false, 00:07:27.020 "get_zone_info": false, 00:07:27.020 "zone_management": false, 00:07:27.020 "zone_append": false, 00:07:27.020 "compare": false, 00:07:27.020 "compare_and_write": false, 00:07:27.020 "abort": false, 00:07:27.020 "seek_hole": false, 00:07:27.020 
"seek_data": false, 00:07:27.020 "copy": false, 00:07:27.020 "nvme_iov_md": false 00:07:27.020 }, 00:07:27.020 "memory_domains": [ 00:07:27.020 { 00:07:27.020 "dma_device_id": "system", 00:07:27.020 "dma_device_type": 1 00:07:27.020 }, 00:07:27.020 { 00:07:27.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.020 "dma_device_type": 2 00:07:27.020 }, 00:07:27.020 { 00:07:27.020 "dma_device_id": "system", 00:07:27.020 "dma_device_type": 1 00:07:27.020 }, 00:07:27.020 { 00:07:27.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.020 "dma_device_type": 2 00:07:27.020 } 00:07:27.020 ], 00:07:27.020 "driver_specific": { 00:07:27.020 "raid": { 00:07:27.020 "uuid": "05119a90-8417-4f5b-bcc6-40e02262b174", 00:07:27.020 "strip_size_kb": 64, 00:07:27.020 "state": "online", 00:07:27.020 "raid_level": "concat", 00:07:27.020 "superblock": true, 00:07:27.020 "num_base_bdevs": 2, 00:07:27.020 "num_base_bdevs_discovered": 2, 00:07:27.020 "num_base_bdevs_operational": 2, 00:07:27.020 "base_bdevs_list": [ 00:07:27.020 { 00:07:27.020 "name": "pt1", 00:07:27.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:27.020 "is_configured": true, 00:07:27.020 "data_offset": 2048, 00:07:27.020 "data_size": 63488 00:07:27.020 }, 00:07:27.020 { 00:07:27.020 "name": "pt2", 00:07:27.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.020 "is_configured": true, 00:07:27.020 "data_offset": 2048, 00:07:27.020 "data_size": 63488 00:07:27.020 } 00:07:27.020 ] 00:07:27.020 } 00:07:27.020 } 00:07:27.020 }' 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:27.020 pt2' 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.020 01:08:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.020 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.280 [2024-10-15 01:08:39.789007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=05119a90-8417-4f5b-bcc6-40e02262b174 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 05119a90-8417-4f5b-bcc6-40e02262b174 ']' 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.280 [2024-10-15 01:08:39.828713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.280 [2024-10-15 01:08:39.828743] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.280 [2024-10-15 01:08:39.828806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.280 [2024-10-15 01:08:39.828858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.280 [2024-10-15 01:08:39.828872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.280 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.280 [2024-10-15 01:08:39.956532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:27.280 [2024-10-15 01:08:39.958374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:27.280 [2024-10-15 01:08:39.958436] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:27.280 [2024-10-15 01:08:39.958483] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:27.280 [2024-10-15 01:08:39.958498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.280 [2024-10-15 01:08:39.958507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:27.280 request: 00:07:27.280 { 00:07:27.280 "name": "raid_bdev1", 00:07:27.280 "raid_level": "concat", 00:07:27.280 "base_bdevs": [ 00:07:27.280 "malloc1", 00:07:27.280 "malloc2" 00:07:27.280 ], 00:07:27.281 "strip_size_kb": 64, 00:07:27.281 "superblock": false, 00:07:27.281 "method": "bdev_raid_create", 00:07:27.281 "req_id": 1 00:07:27.281 } 00:07:27.281 Got JSON-RPC error response 00:07:27.281 response: 00:07:27.281 { 00:07:27.281 "code": -17, 00:07:27.281 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:27.281 } 00:07:27.281 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:27.281 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:27.281 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.281 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.281 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.281 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.281 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.281 01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.281 01:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:27.281 
01:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.540 [2024-10-15 01:08:40.012394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:27.540 [2024-10-15 01:08:40.012438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.540 [2024-10-15 01:08:40.012456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:27.540 [2024-10-15 01:08:40.012464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.540 [2024-10-15 01:08:40.014655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.540 [2024-10-15 01:08:40.014685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:27.540 [2024-10-15 01:08:40.014748] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:27.540 [2024-10-15 01:08:40.014778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:27.540 pt1 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.540 "name": "raid_bdev1", 00:07:27.540 "uuid": "05119a90-8417-4f5b-bcc6-40e02262b174", 00:07:27.540 "strip_size_kb": 64, 00:07:27.540 "state": "configuring", 00:07:27.540 "raid_level": "concat", 00:07:27.540 "superblock": true, 00:07:27.540 "num_base_bdevs": 2, 00:07:27.540 "num_base_bdevs_discovered": 1, 00:07:27.540 "num_base_bdevs_operational": 2, 00:07:27.540 "base_bdevs_list": [ 00:07:27.540 { 00:07:27.540 "name": "pt1", 00:07:27.540 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:27.540 "is_configured": true, 00:07:27.540 "data_offset": 2048, 00:07:27.540 "data_size": 63488 00:07:27.540 }, 00:07:27.540 { 00:07:27.540 "name": null, 00:07:27.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.540 "is_configured": false, 00:07:27.540 "data_offset": 2048, 00:07:27.540 "data_size": 63488 00:07:27.540 } 00:07:27.540 ] 00:07:27.540 }' 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.540 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.800 [2024-10-15 01:08:40.483595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:27.800 [2024-10-15 01:08:40.483652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.800 [2024-10-15 01:08:40.483676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:27.800 [2024-10-15 01:08:40.483685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.800 [2024-10-15 01:08:40.484074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.800 [2024-10-15 01:08:40.484100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:27.800 [2024-10-15 01:08:40.484169] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:27.800 [2024-10-15 01:08:40.484204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:27.800 [2024-10-15 01:08:40.484300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:27.800 [2024-10-15 01:08:40.484315] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:27.800 [2024-10-15 01:08:40.484553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:27.800 [2024-10-15 01:08:40.484673] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:27.800 [2024-10-15 01:08:40.484692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:27.800 [2024-10-15 01:08:40.484804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.800 pt2 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.800 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.060 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.060 "name": "raid_bdev1", 00:07:28.060 "uuid": "05119a90-8417-4f5b-bcc6-40e02262b174", 00:07:28.060 "strip_size_kb": 64, 00:07:28.061 "state": "online", 00:07:28.061 "raid_level": "concat", 00:07:28.061 "superblock": true, 00:07:28.061 "num_base_bdevs": 2, 00:07:28.061 "num_base_bdevs_discovered": 2, 00:07:28.061 "num_base_bdevs_operational": 2, 00:07:28.061 "base_bdevs_list": [ 00:07:28.061 { 00:07:28.061 "name": "pt1", 00:07:28.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.061 "is_configured": true, 00:07:28.061 "data_offset": 2048, 00:07:28.061 "data_size": 63488 00:07:28.061 }, 00:07:28.061 { 00:07:28.061 "name": "pt2", 00:07:28.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.061 "is_configured": true, 00:07:28.061 "data_offset": 2048, 00:07:28.061 "data_size": 63488 00:07:28.061 } 00:07:28.061 ] 00:07:28.061 }' 00:07:28.061 01:08:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.061 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.318 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:28.318 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:28.318 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:28.318 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:28.318 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:28.318 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:28.318 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:28.318 01:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:28.318 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.318 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.318 [2024-10-15 01:08:40.967067] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.318 01:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.318 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:28.318 "name": "raid_bdev1", 00:07:28.318 "aliases": [ 00:07:28.318 "05119a90-8417-4f5b-bcc6-40e02262b174" 00:07:28.318 ], 00:07:28.318 "product_name": "Raid Volume", 00:07:28.318 "block_size": 512, 00:07:28.318 "num_blocks": 126976, 00:07:28.318 "uuid": "05119a90-8417-4f5b-bcc6-40e02262b174", 00:07:28.318 "assigned_rate_limits": { 00:07:28.318 "rw_ios_per_sec": 0, 00:07:28.318 "rw_mbytes_per_sec": 0, 00:07:28.318 
"r_mbytes_per_sec": 0, 00:07:28.318 "w_mbytes_per_sec": 0 00:07:28.318 }, 00:07:28.318 "claimed": false, 00:07:28.318 "zoned": false, 00:07:28.318 "supported_io_types": { 00:07:28.318 "read": true, 00:07:28.318 "write": true, 00:07:28.318 "unmap": true, 00:07:28.318 "flush": true, 00:07:28.318 "reset": true, 00:07:28.318 "nvme_admin": false, 00:07:28.318 "nvme_io": false, 00:07:28.318 "nvme_io_md": false, 00:07:28.318 "write_zeroes": true, 00:07:28.318 "zcopy": false, 00:07:28.318 "get_zone_info": false, 00:07:28.318 "zone_management": false, 00:07:28.318 "zone_append": false, 00:07:28.318 "compare": false, 00:07:28.318 "compare_and_write": false, 00:07:28.318 "abort": false, 00:07:28.318 "seek_hole": false, 00:07:28.318 "seek_data": false, 00:07:28.318 "copy": false, 00:07:28.318 "nvme_iov_md": false 00:07:28.318 }, 00:07:28.318 "memory_domains": [ 00:07:28.318 { 00:07:28.318 "dma_device_id": "system", 00:07:28.318 "dma_device_type": 1 00:07:28.318 }, 00:07:28.318 { 00:07:28.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.318 "dma_device_type": 2 00:07:28.318 }, 00:07:28.318 { 00:07:28.318 "dma_device_id": "system", 00:07:28.318 "dma_device_type": 1 00:07:28.318 }, 00:07:28.318 { 00:07:28.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.318 "dma_device_type": 2 00:07:28.318 } 00:07:28.318 ], 00:07:28.318 "driver_specific": { 00:07:28.318 "raid": { 00:07:28.318 "uuid": "05119a90-8417-4f5b-bcc6-40e02262b174", 00:07:28.318 "strip_size_kb": 64, 00:07:28.318 "state": "online", 00:07:28.318 "raid_level": "concat", 00:07:28.318 "superblock": true, 00:07:28.318 "num_base_bdevs": 2, 00:07:28.318 "num_base_bdevs_discovered": 2, 00:07:28.318 "num_base_bdevs_operational": 2, 00:07:28.318 "base_bdevs_list": [ 00:07:28.318 { 00:07:28.318 "name": "pt1", 00:07:28.318 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.318 "is_configured": true, 00:07:28.318 "data_offset": 2048, 00:07:28.318 "data_size": 63488 00:07:28.318 }, 00:07:28.318 { 00:07:28.318 "name": 
"pt2", 00:07:28.318 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.318 "is_configured": true, 00:07:28.318 "data_offset": 2048, 00:07:28.318 "data_size": 63488 00:07:28.318 } 00:07:28.318 ] 00:07:28.318 } 00:07:28.318 } 00:07:28.318 }' 00:07:28.319 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:28.586 pt2' 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.586 [2024-10-15 01:08:41.182694] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 05119a90-8417-4f5b-bcc6-40e02262b174 '!=' 05119a90-8417-4f5b-bcc6-40e02262b174 ']' 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73282 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73282 ']' 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 73282 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73282 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.586 killing process with pid 73282 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73282' 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73282 00:07:28.586 [2024-10-15 01:08:41.246869] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.586 [2024-10-15 01:08:41.246954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.586 [2024-10-15 01:08:41.247004] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.586 [2024-10-15 01:08:41.247012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:28.586 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73282 00:07:28.586 [2024-10-15 01:08:41.269978] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.869 01:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:28.869 00:07:28.869 real 0m3.333s 00:07:28.869 user 0m5.229s 00:07:28.869 sys 0m0.644s 00:07:28.869 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.869 01:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:28.869 ************************************ 00:07:28.869 END TEST raid_superblock_test 00:07:28.869 ************************************ 00:07:28.869 01:08:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:28.869 01:08:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:28.869 01:08:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.869 01:08:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.869 ************************************ 00:07:28.869 START TEST raid_read_error_test 00:07:28.869 ************************************ 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Pu7erHQaCj 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73483 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73483 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73483 ']' 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.869 01:08:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.129 [2024-10-15 01:08:41.649926] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:07:29.129 [2024-10-15 01:08:41.650044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73483 ] 00:07:29.129 [2024-10-15 01:08:41.791266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.129 [2024-10-15 01:08:41.818319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.389 [2024-10-15 01:08:41.861153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.389 [2024-10-15 01:08:41.861185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:29.959 BaseBdev1_malloc 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.959 true 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.959 [2024-10-15 01:08:42.499435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:29.959 [2024-10-15 01:08:42.499490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.959 [2024-10-15 01:08:42.499511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:29.959 [2024-10-15 01:08:42.499519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.959 [2024-10-15 01:08:42.501552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.959 [2024-10-15 01:08:42.501589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:29.959 BaseBdev1 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:29.959 01:08:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.959 BaseBdev2_malloc 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.959 true 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.959 [2024-10-15 01:08:42.539908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:29.959 [2024-10-15 01:08:42.539954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.959 [2024-10-15 01:08:42.539971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:29.959 [2024-10-15 01:08:42.539987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.959 [2024-10-15 01:08:42.541979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.959 [2024-10-15 01:08:42.542014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:07:29.959 BaseBdev2 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.959 [2024-10-15 01:08:42.551973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.959 [2024-10-15 01:08:42.553779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.959 [2024-10-15 01:08:42.553945] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:29.959 [2024-10-15 01:08:42.553967] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:29.959 [2024-10-15 01:08:42.554212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:29.959 [2024-10-15 01:08:42.554325] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:29.959 [2024-10-15 01:08:42.554337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:29.959 [2024-10-15 01:08:42.554461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.959 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.959 "name": "raid_bdev1", 00:07:29.959 "uuid": "76063950-2d6e-4ba1-b07d-2ff7b3464429", 00:07:29.959 "strip_size_kb": 64, 00:07:29.959 "state": "online", 00:07:29.959 "raid_level": "concat", 00:07:29.959 "superblock": true, 00:07:29.959 "num_base_bdevs": 2, 00:07:29.959 "num_base_bdevs_discovered": 2, 00:07:29.959 "num_base_bdevs_operational": 2, 00:07:29.959 "base_bdevs_list": [ 00:07:29.959 { 00:07:29.959 "name": "BaseBdev1", 00:07:29.959 "uuid": "aafd5a76-0575-5ee5-bcac-24be73bf7a9d", 00:07:29.959 "is_configured": true, 00:07:29.959 "data_offset": 2048, 00:07:29.959 "data_size": 63488 
00:07:29.959 }, 00:07:29.959 { 00:07:29.959 "name": "BaseBdev2", 00:07:29.959 "uuid": "50d7de0c-aa4e-5d9b-90be-1a13742fd4fa", 00:07:29.959 "is_configured": true, 00:07:29.959 "data_offset": 2048, 00:07:29.959 "data_size": 63488 00:07:29.959 } 00:07:29.959 ] 00:07:29.959 }' 00:07:29.960 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.960 01:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.528 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:30.528 01:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:30.528 [2024-10-15 01:08:43.047540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.468 01:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.468 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.468 01:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.468 "name": "raid_bdev1", 00:07:31.468 "uuid": "76063950-2d6e-4ba1-b07d-2ff7b3464429", 00:07:31.468 "strip_size_kb": 64, 00:07:31.469 "state": "online", 00:07:31.469 "raid_level": "concat", 00:07:31.469 "superblock": true, 00:07:31.469 "num_base_bdevs": 2, 00:07:31.469 "num_base_bdevs_discovered": 2, 00:07:31.469 "num_base_bdevs_operational": 2, 00:07:31.469 "base_bdevs_list": [ 00:07:31.469 { 00:07:31.469 "name": "BaseBdev1", 00:07:31.469 "uuid": "aafd5a76-0575-5ee5-bcac-24be73bf7a9d", 00:07:31.469 "is_configured": true, 00:07:31.469 "data_offset": 2048, 00:07:31.469 "data_size": 63488 
00:07:31.469 }, 00:07:31.469 { 00:07:31.469 "name": "BaseBdev2", 00:07:31.469 "uuid": "50d7de0c-aa4e-5d9b-90be-1a13742fd4fa", 00:07:31.469 "is_configured": true, 00:07:31.469 "data_offset": 2048, 00:07:31.469 "data_size": 63488 00:07:31.469 } 00:07:31.469 ] 00:07:31.469 }' 00:07:31.469 01:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.469 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.729 01:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:31.729 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.729 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.729 [2024-10-15 01:08:44.403144] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.729 [2024-10-15 01:08:44.403193] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.729 [2024-10-15 01:08:44.405603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.729 [2024-10-15 01:08:44.405652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.729 [2024-10-15 01:08:44.405685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.729 [2024-10-15 01:08:44.405694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:31.729 { 00:07:31.729 "results": [ 00:07:31.729 { 00:07:31.729 "job": "raid_bdev1", 00:07:31.729 "core_mask": "0x1", 00:07:31.729 "workload": "randrw", 00:07:31.729 "percentage": 50, 00:07:31.729 "status": "finished", 00:07:31.729 "queue_depth": 1, 00:07:31.729 "io_size": 131072, 00:07:31.729 "runtime": 1.356456, 00:07:31.729 "iops": 17789.740323313104, 00:07:31.729 "mibps": 2223.717540414138, 00:07:31.729 
"io_failed": 1, 00:07:31.729 "io_timeout": 0, 00:07:31.729 "avg_latency_us": 77.72043165790483, 00:07:31.729 "min_latency_us": 24.370305676855896, 00:07:31.729 "max_latency_us": 1359.3711790393013 00:07:31.729 } 00:07:31.729 ], 00:07:31.729 "core_count": 1 00:07:31.729 } 00:07:31.729 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.729 01:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73483 00:07:31.729 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73483 ']' 00:07:31.729 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73483 00:07:31.729 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:31.729 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.729 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73483 00:07:31.729 killing process with pid 73483 00:07:31.729 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.729 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.729 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73483' 00:07:31.729 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73483 00:07:31.729 [2024-10-15 01:08:44.447526] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:31.729 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73483 00:07:31.990 [2024-10-15 01:08:44.463355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.990 01:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Pu7erHQaCj 00:07:31.990 01:08:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:31.990 01:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:31.990 01:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:31.990 01:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:31.990 01:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:31.990 01:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:31.990 ************************************ 00:07:31.990 END TEST raid_read_error_test 00:07:31.990 ************************************ 00:07:31.990 01:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:31.990 00:07:31.990 real 0m3.121s 00:07:31.990 user 0m3.968s 00:07:31.990 sys 0m0.479s 00:07:31.990 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.990 01:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.250 01:08:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:32.250 01:08:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:32.250 01:08:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.250 01:08:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.250 ************************************ 00:07:32.250 START TEST raid_write_error_test 00:07:32.250 ************************************ 00:07:32.250 01:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:32.250 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:32.250 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:32.250 01:08:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:32.250 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:32.250 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.250 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:32.250 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.250 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:32.251 01:08:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QmELM2XE6h 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73612 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73612 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73612 ']' 00:07:32.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.251 01:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.251 [2024-10-15 01:08:44.845072] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:07:32.251 [2024-10-15 01:08:44.845217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73612 ]
00:07:32.251 [2024-10-15 01:08:44.970075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:32.510 [2024-10-15 01:08:44.997357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:32.510 [2024-10-15 01:08:45.039843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:32.510 [2024-10-15 01:08:45.039959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.081 BaseBdev1_malloc
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.081 true
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.081 [2024-10-15 01:08:45.686139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:07:33.081 [2024-10-15 01:08:45.686225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:33.081 [2024-10-15 01:08:45.686246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:07:33.081 [2024-10-15 01:08:45.686254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:33.081 [2024-10-15 01:08:45.688372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:33.081 [2024-10-15 01:08:45.688407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:07:33.081 BaseBdev1
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.081 BaseBdev2_malloc
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.081 true
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.081 [2024-10-15 01:08:45.726538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:33.081 [2024-10-15 01:08:45.726584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:33.081 [2024-10-15 01:08:45.726616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:07:33.081 [2024-10-15 01:08:45.726632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:33.081 [2024-10-15 01:08:45.728658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:33.081 [2024-10-15 01:08:45.728692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:33.081 BaseBdev2
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.081 [2024-10-15 01:08:45.738595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:33.081 [2024-10-15 01:08:45.740471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:33.081 [2024-10-15 01:08:45.740654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:07:33.081 [2024-10-15 01:08:45.740666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:33.081 [2024-10-15 01:08:45.740902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390
00:07:33.081 [2024-10-15 01:08:45.741014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:07:33.081 [2024-10-15 01:08:45.741025] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900
00:07:33.081 [2024-10-15 01:08:45.741143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:33.081 "name": "raid_bdev1",
00:07:33.081 "uuid": "9cffe687-0358-4e6b-bb84-20ac3eb5ba4d",
00:07:33.081 "strip_size_kb": 64,
00:07:33.081 "state": "online",
00:07:33.081 "raid_level": "concat",
00:07:33.081 "superblock": true,
00:07:33.081 "num_base_bdevs": 2,
00:07:33.081 "num_base_bdevs_discovered": 2,
00:07:33.081 "num_base_bdevs_operational": 2,
00:07:33.081 "base_bdevs_list": [
00:07:33.081 {
00:07:33.081 "name": "BaseBdev1",
00:07:33.081 "uuid": "daa01f75-cd70-532c-b3ef-9a5eb87eb994",
00:07:33.081 "is_configured": true,
00:07:33.081 "data_offset": 2048,
00:07:33.081 "data_size": 63488
00:07:33.081 },
00:07:33.081 {
00:07:33.081 "name": "BaseBdev2",
00:07:33.081 "uuid": "78479a25-6854-51a2-b27a-9cd6893076e3",
00:07:33.081 "is_configured": true,
00:07:33.081 "data_offset": 2048,
00:07:33.081 "data_size": 63488
00:07:33.081 }
00:07:33.081 ]
00:07:33.081 }'
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:33.081 01:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.651 01:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:33.651 01:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:33.651 [2024-10-15 01:08:46.266090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:34.590 "name": "raid_bdev1",
00:07:34.590 "uuid": "9cffe687-0358-4e6b-bb84-20ac3eb5ba4d",
00:07:34.590 "strip_size_kb": 64,
00:07:34.590 "state": "online",
00:07:34.590 "raid_level": "concat",
00:07:34.590 "superblock": true,
00:07:34.590 "num_base_bdevs": 2,
00:07:34.590 "num_base_bdevs_discovered": 2,
00:07:34.590 "num_base_bdevs_operational": 2,
00:07:34.590 "base_bdevs_list": [
00:07:34.590 {
00:07:34.590 "name": "BaseBdev1",
00:07:34.590 "uuid": "daa01f75-cd70-532c-b3ef-9a5eb87eb994",
00:07:34.590 "is_configured": true,
00:07:34.590 "data_offset": 2048,
00:07:34.590 "data_size": 63488
00:07:34.590 },
00:07:34.590 {
00:07:34.590 "name": "BaseBdev2",
00:07:34.590 "uuid": "78479a25-6854-51a2-b27a-9cd6893076e3",
00:07:34.590 "is_configured": true,
00:07:34.590 "data_offset": 2048,
00:07:34.590 "data_size": 63488
00:07:34.590 }
00:07:34.590 ]
00:07:34.590 }'
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:34.590 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.159 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:35.159 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:35.159 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.159 [2024-10-15 01:08:47.682100] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:35.159 [2024-10-15 01:08:47.682207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:35.159 [2024-10-15 01:08:47.684933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:35.159 [2024-10-15 01:08:47.685009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:35.159 [2024-10-15 01:08:47.685061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:35.159 [2024-10-15 01:08:47.685112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline
00:07:35.159 {
00:07:35.159 "results": [
00:07:35.159 {
00:07:35.159 "job": "raid_bdev1",
00:07:35.159 "core_mask": "0x1",
00:07:35.159 "workload": "randrw",
00:07:35.159 "percentage": 50,
00:07:35.159 "status": "finished",
00:07:35.159 "queue_depth": 1,
00:07:35.159 "io_size": 131072,
00:07:35.159 "runtime": 1.417074,
00:07:35.159 "iops": 17617.287452878256,
00:07:35.159 "mibps": 2202.160931609782,
00:07:35.159 "io_failed": 1,
00:07:35.159 "io_timeout": 0,
00:07:35.159 "avg_latency_us": 78.35612352449988,
00:07:35.159 "min_latency_us": 24.482096069868994,
00:07:35.159 "max_latency_us": 1430.9170305676855
00:07:35.159 }
00:07:35.159 ],
00:07:35.159 "core_count": 1
00:07:35.159 }
00:07:35.159 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:35.159 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73612
00:07:35.159 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73612 ']'
00:07:35.159 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73612
00:07:35.159 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:07:35.159 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:35.159 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73612
00:07:35.159 killing process with pid 73612
01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:35.159 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:35.159 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73612'
00:07:35.159 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73612
00:07:35.159 [2024-10-15 01:08:47.721049] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:35.159 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73612
00:07:35.159 [2024-10-15 01:08:47.736139] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:35.425 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QmELM2XE6h
00:07:35.425 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:07:35.425 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:07:35.425 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71
00:07:35.425 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:07:35.425 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:35.425 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:35.425 01:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]]
00:07:35.425 ************************************
00:07:35.425 END TEST raid_write_error_test
00:07:35.425 ************************************
00:07:35.425
00:07:35.425 real 0m3.199s
00:07:35.425 user 0m4.131s
00:07:35.425 sys 0m0.468s
00:07:35.425 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:35.425 01:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.425 01:08:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:07:35.425 01:08:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:07:35.425 01:08:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:35.425 01:08:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:35.425 01:08:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:35.425 ************************************
00:07:35.425 START TEST raid_state_function_test
00:07:35.425 ************************************
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73739
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73739'
Process raid pid: 73739
01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73739
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73739 ']'
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:35.425 01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:35.425 [2024-10-15 01:08:48.109415] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization...
00:07:35.425 [2024-10-15 01:08:48.109551] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:35.686 [2024-10-15 01:08:48.235974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:35.686 [2024-10-15 01:08:48.260846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:35.686 [2024-10-15 01:08:48.302985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:35.686 [2024-10-15 01:08:48.303018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.255 [2024-10-15 01:08:48.932337] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:36.255 [2024-10-15 01:08:48.932399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:36.255 [2024-10-15 01:08:48.932411] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:36.255 [2024-10-15 01:08:48.932421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.255 01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.515 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:36.515 "name": "Existed_Raid",
00:07:36.515 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:36.515 "strip_size_kb": 0,
00:07:36.515 "state": "configuring",
"raid_level": "raid1",
00:07:36.515 "superblock": false,
00:07:36.515 "num_base_bdevs": 2,
00:07:36.515 "num_base_bdevs_discovered": 0,
00:07:36.515 "num_base_bdevs_operational": 2,
00:07:36.515 "base_bdevs_list": [
00:07:36.515 {
00:07:36.515 "name": "BaseBdev1",
00:07:36.515 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:36.515 "is_configured": false,
00:07:36.515 "data_offset": 0,
00:07:36.515 "data_size": 0
00:07:36.515 },
00:07:36.515 {
00:07:36.515 "name": "BaseBdev2",
00:07:36.515 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:36.515 "is_configured": false,
00:07:36.515 "data_offset": 0,
00:07:36.515 "data_size": 0
00:07:36.515 }
00:07:36.515 ]
00:07:36.515 }'
00:07:36.515 01:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:36.515 01:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.774 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:36.774 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.774 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.774 [2024-10-15 01:08:49.399441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:36.774 [2024-10-15 01:08:49.399530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.775 [2024-10-15 01:08:49.407430] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:36.775 [2024-10-15 01:08:49.407509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:36.775 [2024-10-15 01:08:49.407535] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:36.775 [2024-10-15 01:08:49.407569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.775 [2024-10-15 01:08:49.424363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:36.775 BaseBdev1
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.775 [
00:07:36.775 {
00:07:36.775 "name": "BaseBdev1",
00:07:36.775 "aliases": [
00:07:36.775 "9b2e9969-bf59-4140-ae4c-38dda976defc"
00:07:36.775 ],
00:07:36.775 "product_name": "Malloc disk",
00:07:36.775 "block_size": 512,
00:07:36.775 "num_blocks": 65536,
00:07:36.775 "uuid": "9b2e9969-bf59-4140-ae4c-38dda976defc",
00:07:36.775 "assigned_rate_limits": {
00:07:36.775 "rw_ios_per_sec": 0,
00:07:36.775 "rw_mbytes_per_sec": 0,
00:07:36.775 "r_mbytes_per_sec": 0,
00:07:36.775 "w_mbytes_per_sec": 0
00:07:36.775 },
00:07:36.775 "claimed": true,
00:07:36.775 "claim_type": "exclusive_write",
00:07:36.775 "zoned": false,
00:07:36.775 "supported_io_types": {
00:07:36.775 "read": true,
00:07:36.775 "write": true,
00:07:36.775 "unmap": true,
00:07:36.775 "flush": true,
00:07:36.775 "reset": true,
00:07:36.775 "nvme_admin": false,
00:07:36.775 "nvme_io": false,
00:07:36.775 "nvme_io_md": false,
00:07:36.775 "write_zeroes": true,
00:07:36.775 "zcopy": true,
00:07:36.775 "get_zone_info": false,
00:07:36.775 "zone_management": false,
00:07:36.775 "zone_append": false,
00:07:36.775 "compare": false,
00:07:36.775 "compare_and_write": false,
00:07:36.775 "abort": true,
00:07:36.775 "seek_hole": false,
00:07:36.775 "seek_data": false,
00:07:36.775 "copy": true,
00:07:36.775 "nvme_iov_md": false
00:07:36.775 },
00:07:36.775 "memory_domains": [
00:07:36.775 {
00:07:36.775 "dma_device_id": "system",
00:07:36.775 "dma_device_type": 1
00:07:36.775 },
00:07:36.775 {
00:07:36.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:36.775 "dma_device_type": 2
00:07:36.775 }
00:07:36.775 ],
00:07:36.775 "driver_specific": {}
00:07:36.775 }
00:07:36.775 ]
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:36.775 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:37.034 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:37.034 "name": "Existed_Raid",
00:07:37.034 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:37.034 "strip_size_kb": 0,
00:07:37.034 "state": "configuring",
00:07:37.034 "raid_level": "raid1",
00:07:37.034 "superblock": false,
00:07:37.034 "num_base_bdevs": 2,
00:07:37.034 "num_base_bdevs_discovered": 1,
00:07:37.034 "num_base_bdevs_operational": 2,
00:07:37.034 "base_bdevs_list": [
00:07:37.034 {
00:07:37.034 "name": "BaseBdev1",
00:07:37.034 "uuid": "9b2e9969-bf59-4140-ae4c-38dda976defc",
00:07:37.034 "is_configured": true,
00:07:37.034 "data_offset": 0,
00:07:37.034 "data_size": 65536
00:07:37.034 },
00:07:37.035 {
00:07:37.035 "name": "BaseBdev2",
00:07:37.035 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:37.035 "is_configured": false,
00:07:37.035 "data_offset": 0,
00:07:37.035 "data_size": 0
00:07:37.035 }
00:07:37.035 ]
00:07:37.035 }'
00:07:37.035 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:37.035 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.295 [2024-10-15 01:08:49.935525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:37.295 [2024-10-15 01:08:49.935570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.295 [2024-10-15 01:08:49.943560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:37.295 [2024-10-15 01:08:49.945386] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:37.295 [2024-10-15 01:08:49.945418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- #
local num_base_bdevs_operational=2 00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.295 01:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.295 "name": "Existed_Raid", 00:07:37.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.295 "strip_size_kb": 0, 00:07:37.295 "state": "configuring", 00:07:37.295 "raid_level": "raid1", 00:07:37.295 "superblock": false, 00:07:37.295 "num_base_bdevs": 2, 00:07:37.295 "num_base_bdevs_discovered": 1, 00:07:37.295 "num_base_bdevs_operational": 2, 00:07:37.295 "base_bdevs_list": [ 00:07:37.295 { 00:07:37.295 "name": "BaseBdev1", 00:07:37.295 "uuid": "9b2e9969-bf59-4140-ae4c-38dda976defc", 00:07:37.295 "is_configured": true, 00:07:37.295 "data_offset": 0, 00:07:37.295 "data_size": 65536 00:07:37.295 }, 00:07:37.295 { 00:07:37.295 "name": "BaseBdev2", 00:07:37.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.295 "is_configured": false, 00:07:37.295 "data_offset": 0, 00:07:37.295 "data_size": 0 00:07:37.295 } 00:07:37.295 
] 00:07:37.295 }' 00:07:37.295 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.295 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.863 [2024-10-15 01:08:50.373865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.863 [2024-10-15 01:08:50.373981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:37.863 [2024-10-15 01:08:50.374005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:37.863 [2024-10-15 01:08:50.374315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:37.863 [2024-10-15 01:08:50.374496] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:37.863 [2024-10-15 01:08:50.374547] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:37.863 [2024-10-15 01:08:50.374758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.863 BaseBdev2 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:37.863 01:08:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.863 [ 00:07:37.863 { 00:07:37.863 "name": "BaseBdev2", 00:07:37.863 "aliases": [ 00:07:37.863 "10da8473-4b2a-4e4b-aa0b-9b9ad10e6476" 00:07:37.863 ], 00:07:37.863 "product_name": "Malloc disk", 00:07:37.863 "block_size": 512, 00:07:37.863 "num_blocks": 65536, 00:07:37.863 "uuid": "10da8473-4b2a-4e4b-aa0b-9b9ad10e6476", 00:07:37.863 "assigned_rate_limits": { 00:07:37.863 "rw_ios_per_sec": 0, 00:07:37.863 "rw_mbytes_per_sec": 0, 00:07:37.863 "r_mbytes_per_sec": 0, 00:07:37.863 "w_mbytes_per_sec": 0 00:07:37.863 }, 00:07:37.863 "claimed": true, 00:07:37.863 "claim_type": "exclusive_write", 00:07:37.863 "zoned": false, 00:07:37.863 "supported_io_types": { 00:07:37.863 "read": true, 00:07:37.863 "write": true, 00:07:37.863 "unmap": true, 00:07:37.863 "flush": true, 00:07:37.863 "reset": true, 00:07:37.863 "nvme_admin": false, 00:07:37.863 "nvme_io": false, 00:07:37.863 "nvme_io_md": 
false, 00:07:37.863 "write_zeroes": true, 00:07:37.863 "zcopy": true, 00:07:37.863 "get_zone_info": false, 00:07:37.863 "zone_management": false, 00:07:37.863 "zone_append": false, 00:07:37.863 "compare": false, 00:07:37.863 "compare_and_write": false, 00:07:37.863 "abort": true, 00:07:37.863 "seek_hole": false, 00:07:37.863 "seek_data": false, 00:07:37.863 "copy": true, 00:07:37.863 "nvme_iov_md": false 00:07:37.863 }, 00:07:37.863 "memory_domains": [ 00:07:37.863 { 00:07:37.863 "dma_device_id": "system", 00:07:37.863 "dma_device_type": 1 00:07:37.863 }, 00:07:37.863 { 00:07:37.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.863 "dma_device_type": 2 00:07:37.863 } 00:07:37.863 ], 00:07:37.863 "driver_specific": {} 00:07:37.863 } 00:07:37.863 ] 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.863 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.863 "name": "Existed_Raid", 00:07:37.863 "uuid": "0550fc1e-c32e-440c-9750-4df93c765ea7", 00:07:37.864 "strip_size_kb": 0, 00:07:37.864 "state": "online", 00:07:37.864 "raid_level": "raid1", 00:07:37.864 "superblock": false, 00:07:37.864 "num_base_bdevs": 2, 00:07:37.864 "num_base_bdevs_discovered": 2, 00:07:37.864 "num_base_bdevs_operational": 2, 00:07:37.864 "base_bdevs_list": [ 00:07:37.864 { 00:07:37.864 "name": "BaseBdev1", 00:07:37.864 "uuid": "9b2e9969-bf59-4140-ae4c-38dda976defc", 00:07:37.864 "is_configured": true, 00:07:37.864 "data_offset": 0, 00:07:37.864 "data_size": 65536 00:07:37.864 }, 00:07:37.864 { 00:07:37.864 "name": "BaseBdev2", 00:07:37.864 "uuid": "10da8473-4b2a-4e4b-aa0b-9b9ad10e6476", 00:07:37.864 "is_configured": true, 00:07:37.864 "data_offset": 0, 00:07:37.864 "data_size": 65536 00:07:37.864 } 00:07:37.864 ] 00:07:37.864 }' 00:07:37.864 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:37.864 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.123 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:38.123 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:38.123 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:38.123 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.123 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.123 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.123 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:38.123 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.123 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.123 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.123 [2024-10-15 01:08:50.833364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.382 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.382 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.382 "name": "Existed_Raid", 00:07:38.382 "aliases": [ 00:07:38.382 "0550fc1e-c32e-440c-9750-4df93c765ea7" 00:07:38.382 ], 00:07:38.382 "product_name": "Raid Volume", 00:07:38.382 "block_size": 512, 00:07:38.382 "num_blocks": 65536, 00:07:38.382 "uuid": "0550fc1e-c32e-440c-9750-4df93c765ea7", 00:07:38.382 "assigned_rate_limits": { 00:07:38.382 "rw_ios_per_sec": 0, 00:07:38.382 "rw_mbytes_per_sec": 0, 00:07:38.382 "r_mbytes_per_sec": 
0, 00:07:38.382 "w_mbytes_per_sec": 0 00:07:38.382 }, 00:07:38.382 "claimed": false, 00:07:38.382 "zoned": false, 00:07:38.382 "supported_io_types": { 00:07:38.382 "read": true, 00:07:38.382 "write": true, 00:07:38.382 "unmap": false, 00:07:38.382 "flush": false, 00:07:38.382 "reset": true, 00:07:38.382 "nvme_admin": false, 00:07:38.382 "nvme_io": false, 00:07:38.382 "nvme_io_md": false, 00:07:38.382 "write_zeroes": true, 00:07:38.382 "zcopy": false, 00:07:38.382 "get_zone_info": false, 00:07:38.382 "zone_management": false, 00:07:38.382 "zone_append": false, 00:07:38.382 "compare": false, 00:07:38.382 "compare_and_write": false, 00:07:38.382 "abort": false, 00:07:38.382 "seek_hole": false, 00:07:38.382 "seek_data": false, 00:07:38.382 "copy": false, 00:07:38.382 "nvme_iov_md": false 00:07:38.382 }, 00:07:38.382 "memory_domains": [ 00:07:38.382 { 00:07:38.382 "dma_device_id": "system", 00:07:38.382 "dma_device_type": 1 00:07:38.382 }, 00:07:38.382 { 00:07:38.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.382 "dma_device_type": 2 00:07:38.382 }, 00:07:38.382 { 00:07:38.382 "dma_device_id": "system", 00:07:38.382 "dma_device_type": 1 00:07:38.382 }, 00:07:38.382 { 00:07:38.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.382 "dma_device_type": 2 00:07:38.382 } 00:07:38.382 ], 00:07:38.382 "driver_specific": { 00:07:38.382 "raid": { 00:07:38.382 "uuid": "0550fc1e-c32e-440c-9750-4df93c765ea7", 00:07:38.382 "strip_size_kb": 0, 00:07:38.382 "state": "online", 00:07:38.382 "raid_level": "raid1", 00:07:38.382 "superblock": false, 00:07:38.382 "num_base_bdevs": 2, 00:07:38.382 "num_base_bdevs_discovered": 2, 00:07:38.382 "num_base_bdevs_operational": 2, 00:07:38.382 "base_bdevs_list": [ 00:07:38.382 { 00:07:38.382 "name": "BaseBdev1", 00:07:38.382 "uuid": "9b2e9969-bf59-4140-ae4c-38dda976defc", 00:07:38.382 "is_configured": true, 00:07:38.382 "data_offset": 0, 00:07:38.382 "data_size": 65536 00:07:38.382 }, 00:07:38.382 { 00:07:38.382 "name": "BaseBdev2", 
00:07:38.382 "uuid": "10da8473-4b2a-4e4b-aa0b-9b9ad10e6476", 00:07:38.382 "is_configured": true, 00:07:38.382 "data_offset": 0, 00:07:38.382 "data_size": 65536 00:07:38.382 } 00:07:38.382 ] 00:07:38.382 } 00:07:38.382 } 00:07:38.382 }' 00:07:38.382 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.382 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:38.382 BaseBdev2' 00:07:38.382 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.382 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.382 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.382 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:38.382 01:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.382 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.382 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.382 01:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.382 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.382 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.382 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.382 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.383 [2024-10-15 01:08:51.072746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.383 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.642 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.642 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.642 "name": "Existed_Raid", 00:07:38.642 "uuid": "0550fc1e-c32e-440c-9750-4df93c765ea7", 00:07:38.642 "strip_size_kb": 0, 00:07:38.642 "state": "online", 00:07:38.642 "raid_level": "raid1", 00:07:38.642 "superblock": false, 00:07:38.642 "num_base_bdevs": 2, 00:07:38.642 "num_base_bdevs_discovered": 1, 00:07:38.642 "num_base_bdevs_operational": 1, 00:07:38.642 "base_bdevs_list": [ 00:07:38.642 
{ 00:07:38.642 "name": null, 00:07:38.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.642 "is_configured": false, 00:07:38.642 "data_offset": 0, 00:07:38.642 "data_size": 65536 00:07:38.642 }, 00:07:38.642 { 00:07:38.642 "name": "BaseBdev2", 00:07:38.642 "uuid": "10da8473-4b2a-4e4b-aa0b-9b9ad10e6476", 00:07:38.642 "is_configured": true, 00:07:38.642 "data_offset": 0, 00:07:38.642 "data_size": 65536 00:07:38.642 } 00:07:38.642 ] 00:07:38.642 }' 00:07:38.642 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.642 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:38.901 [2024-10-15 01:08:51.579050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:38.901 [2024-10-15 01:08:51.579218] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.901 [2024-10-15 01:08:51.590722] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.901 [2024-10-15 01:08:51.590821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.901 [2024-10-15 01:08:51.590862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.901 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.169 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:39.169 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:39.169 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:39.169 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73739 00:07:39.169 01:08:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73739 ']' 00:07:39.169 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73739 00:07:39.169 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:39.169 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.169 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73739 00:07:39.169 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.169 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.169 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73739' 00:07:39.169 killing process with pid 73739 00:07:39.169 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73739 00:07:39.169 [2024-10-15 01:08:51.687328] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.169 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73739 00:07:39.169 [2024-10-15 01:08:51.688354] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:39.445 00:07:39.445 real 0m3.882s 00:07:39.445 user 0m6.219s 00:07:39.445 sys 0m0.721s 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.445 ************************************ 00:07:39.445 END TEST raid_state_function_test 00:07:39.445 ************************************ 00:07:39.445 01:08:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:39.445 01:08:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:39.445 01:08:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.445 01:08:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.445 ************************************ 00:07:39.445 START TEST raid_state_function_test_sb 00:07:39.445 ************************************ 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73981 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73981' 00:07:39.445 Process raid pid: 73981 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73981 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73981 ']' 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.445 01:08:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.445 01:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.445 [2024-10-15 01:08:52.062002] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:07:39.445 [2024-10-15 01:08:52.062234] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.713 [2024-10-15 01:08:52.206647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.713 [2024-10-15 01:08:52.233775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.713 [2024-10-15 01:08:52.276389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.713 [2024-10-15 01:08:52.276500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.282 [2024-10-15 01:08:52.889900] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:40.282 [2024-10-15 01:08:52.889958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:40.282 [2024-10-15 01:08:52.889987] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:40.282 [2024-10-15 01:08:52.889998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.282 "name": "Existed_Raid", 00:07:40.282 "uuid": "3d287399-e54d-46f5-99d6-370783523229", 00:07:40.282 "strip_size_kb": 0, 00:07:40.282 "state": "configuring", 00:07:40.282 "raid_level": "raid1", 00:07:40.282 "superblock": true, 00:07:40.282 "num_base_bdevs": 2, 00:07:40.282 "num_base_bdevs_discovered": 0, 00:07:40.282 "num_base_bdevs_operational": 2, 00:07:40.282 "base_bdevs_list": [ 00:07:40.282 { 00:07:40.282 "name": "BaseBdev1", 00:07:40.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.282 "is_configured": false, 00:07:40.282 "data_offset": 0, 00:07:40.282 "data_size": 0 00:07:40.282 }, 00:07:40.282 { 00:07:40.282 "name": "BaseBdev2", 00:07:40.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.282 "is_configured": false, 00:07:40.282 "data_offset": 0, 00:07:40.282 "data_size": 0 00:07:40.282 } 00:07:40.282 ] 00:07:40.282 }' 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.282 01:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.852 [2024-10-15 01:08:53.321084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:40.852 [2024-10-15 01:08:53.321192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.852 [2024-10-15 01:08:53.333072] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:40.852 [2024-10-15 01:08:53.333145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:40.852 [2024-10-15 01:08:53.333173] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:40.852 [2024-10-15 01:08:53.333232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.852 [2024-10-15 01:08:53.353856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:40.852 BaseBdev1 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.852 [ 00:07:40.852 { 00:07:40.852 "name": "BaseBdev1", 00:07:40.852 "aliases": [ 00:07:40.852 "92ddc8aa-ebec-4c28-ac88-7f436063d020" 00:07:40.852 ], 00:07:40.852 "product_name": "Malloc disk", 00:07:40.852 "block_size": 512, 00:07:40.852 "num_blocks": 65536, 00:07:40.852 "uuid": "92ddc8aa-ebec-4c28-ac88-7f436063d020", 00:07:40.852 "assigned_rate_limits": { 00:07:40.852 "rw_ios_per_sec": 0, 00:07:40.852 "rw_mbytes_per_sec": 0, 00:07:40.852 "r_mbytes_per_sec": 0, 00:07:40.852 "w_mbytes_per_sec": 0 00:07:40.852 }, 00:07:40.852 "claimed": true, 
00:07:40.852 "claim_type": "exclusive_write", 00:07:40.852 "zoned": false, 00:07:40.852 "supported_io_types": { 00:07:40.852 "read": true, 00:07:40.852 "write": true, 00:07:40.852 "unmap": true, 00:07:40.852 "flush": true, 00:07:40.852 "reset": true, 00:07:40.852 "nvme_admin": false, 00:07:40.852 "nvme_io": false, 00:07:40.852 "nvme_io_md": false, 00:07:40.852 "write_zeroes": true, 00:07:40.852 "zcopy": true, 00:07:40.852 "get_zone_info": false, 00:07:40.852 "zone_management": false, 00:07:40.852 "zone_append": false, 00:07:40.852 "compare": false, 00:07:40.852 "compare_and_write": false, 00:07:40.852 "abort": true, 00:07:40.852 "seek_hole": false, 00:07:40.852 "seek_data": false, 00:07:40.852 "copy": true, 00:07:40.852 "nvme_iov_md": false 00:07:40.852 }, 00:07:40.852 "memory_domains": [ 00:07:40.852 { 00:07:40.852 "dma_device_id": "system", 00:07:40.852 "dma_device_type": 1 00:07:40.852 }, 00:07:40.852 { 00:07:40.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.852 "dma_device_type": 2 00:07:40.852 } 00:07:40.852 ], 00:07:40.852 "driver_specific": {} 00:07:40.852 } 00:07:40.852 ] 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.852 "name": "Existed_Raid", 00:07:40.852 "uuid": "fa6f1305-b76c-4cb9-8181-b13075218bf0", 00:07:40.852 "strip_size_kb": 0, 00:07:40.852 "state": "configuring", 00:07:40.852 "raid_level": "raid1", 00:07:40.852 "superblock": true, 00:07:40.852 "num_base_bdevs": 2, 00:07:40.852 "num_base_bdevs_discovered": 1, 00:07:40.852 "num_base_bdevs_operational": 2, 00:07:40.852 "base_bdevs_list": [ 00:07:40.852 { 00:07:40.852 "name": "BaseBdev1", 00:07:40.852 "uuid": "92ddc8aa-ebec-4c28-ac88-7f436063d020", 00:07:40.852 "is_configured": true, 00:07:40.852 "data_offset": 2048, 00:07:40.852 "data_size": 63488 00:07:40.852 }, 00:07:40.852 { 00:07:40.852 "name": "BaseBdev2", 00:07:40.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.852 "is_configured": false, 00:07:40.852 
"data_offset": 0, 00:07:40.852 "data_size": 0 00:07:40.852 } 00:07:40.852 ] 00:07:40.852 }' 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.852 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.420 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.420 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.421 [2024-10-15 01:08:53.853109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:41.421 [2024-10-15 01:08:53.853207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.421 [2024-10-15 01:08:53.865138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.421 [2024-10-15 01:08:53.866967] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.421 [2024-10-15 01:08:53.867041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.421 "name": "Existed_Raid", 00:07:41.421 "uuid": "df5ac485-064c-4e14-84bf-4be1b84807b4", 00:07:41.421 "strip_size_kb": 0, 00:07:41.421 "state": "configuring", 00:07:41.421 "raid_level": "raid1", 00:07:41.421 "superblock": true, 00:07:41.421 "num_base_bdevs": 2, 00:07:41.421 "num_base_bdevs_discovered": 1, 00:07:41.421 "num_base_bdevs_operational": 2, 00:07:41.421 "base_bdevs_list": [ 00:07:41.421 { 00:07:41.421 "name": "BaseBdev1", 00:07:41.421 "uuid": "92ddc8aa-ebec-4c28-ac88-7f436063d020", 00:07:41.421 "is_configured": true, 00:07:41.421 "data_offset": 2048, 00:07:41.421 "data_size": 63488 00:07:41.421 }, 00:07:41.421 { 00:07:41.421 "name": "BaseBdev2", 00:07:41.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.421 "is_configured": false, 00:07:41.421 "data_offset": 0, 00:07:41.421 "data_size": 0 00:07:41.421 } 00:07:41.421 ] 00:07:41.421 }' 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.421 01:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.681 [2024-10-15 01:08:54.295352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:41.681 [2024-10-15 01:08:54.295539] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:41.681 [2024-10-15 01:08:54.295554] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:41.681 [2024-10-15 01:08:54.295798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:41.681 
BaseBdev2 00:07:41.681 [2024-10-15 01:08:54.295935] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:41.681 [2024-10-15 01:08:54.295958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:41.681 [2024-10-15 01:08:54.296103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:41.681 [ 00:07:41.681 { 00:07:41.681 "name": "BaseBdev2", 00:07:41.681 "aliases": [ 00:07:41.681 "e2e6ca48-ee89-4ad8-a90b-0610217c8e74" 00:07:41.681 ], 00:07:41.681 "product_name": "Malloc disk", 00:07:41.681 "block_size": 512, 00:07:41.681 "num_blocks": 65536, 00:07:41.681 "uuid": "e2e6ca48-ee89-4ad8-a90b-0610217c8e74", 00:07:41.681 "assigned_rate_limits": { 00:07:41.681 "rw_ios_per_sec": 0, 00:07:41.681 "rw_mbytes_per_sec": 0, 00:07:41.681 "r_mbytes_per_sec": 0, 00:07:41.681 "w_mbytes_per_sec": 0 00:07:41.681 }, 00:07:41.681 "claimed": true, 00:07:41.681 "claim_type": "exclusive_write", 00:07:41.681 "zoned": false, 00:07:41.681 "supported_io_types": { 00:07:41.681 "read": true, 00:07:41.681 "write": true, 00:07:41.681 "unmap": true, 00:07:41.681 "flush": true, 00:07:41.681 "reset": true, 00:07:41.681 "nvme_admin": false, 00:07:41.681 "nvme_io": false, 00:07:41.681 "nvme_io_md": false, 00:07:41.681 "write_zeroes": true, 00:07:41.681 "zcopy": true, 00:07:41.681 "get_zone_info": false, 00:07:41.681 "zone_management": false, 00:07:41.681 "zone_append": false, 00:07:41.681 "compare": false, 00:07:41.681 "compare_and_write": false, 00:07:41.681 "abort": true, 00:07:41.681 "seek_hole": false, 00:07:41.681 "seek_data": false, 00:07:41.681 "copy": true, 00:07:41.681 "nvme_iov_md": false 00:07:41.681 }, 00:07:41.681 "memory_domains": [ 00:07:41.681 { 00:07:41.681 "dma_device_id": "system", 00:07:41.681 "dma_device_type": 1 00:07:41.681 }, 00:07:41.681 { 00:07:41.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.681 "dma_device_type": 2 00:07:41.681 } 00:07:41.681 ], 00:07:41.681 "driver_specific": {} 00:07:41.681 } 00:07:41.681 ] 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:41.681 "name": "Existed_Raid", 00:07:41.681 "uuid": "df5ac485-064c-4e14-84bf-4be1b84807b4", 00:07:41.681 "strip_size_kb": 0, 00:07:41.681 "state": "online", 00:07:41.681 "raid_level": "raid1", 00:07:41.681 "superblock": true, 00:07:41.681 "num_base_bdevs": 2, 00:07:41.681 "num_base_bdevs_discovered": 2, 00:07:41.681 "num_base_bdevs_operational": 2, 00:07:41.681 "base_bdevs_list": [ 00:07:41.681 { 00:07:41.681 "name": "BaseBdev1", 00:07:41.681 "uuid": "92ddc8aa-ebec-4c28-ac88-7f436063d020", 00:07:41.681 "is_configured": true, 00:07:41.681 "data_offset": 2048, 00:07:41.681 "data_size": 63488 00:07:41.681 }, 00:07:41.681 { 00:07:41.681 "name": "BaseBdev2", 00:07:41.681 "uuid": "e2e6ca48-ee89-4ad8-a90b-0610217c8e74", 00:07:41.681 "is_configured": true, 00:07:41.681 "data_offset": 2048, 00:07:41.681 "data_size": 63488 00:07:41.681 } 00:07:41.681 ] 00:07:41.681 }' 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.681 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.250 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:42.250 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:42.250 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:42.250 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:42.250 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:42.250 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:42.250 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:42.250 01:08:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:42.250 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.250 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.250 [2024-10-15 01:08:54.738922] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.250 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.250 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:42.250 "name": "Existed_Raid", 00:07:42.250 "aliases": [ 00:07:42.250 "df5ac485-064c-4e14-84bf-4be1b84807b4" 00:07:42.250 ], 00:07:42.250 "product_name": "Raid Volume", 00:07:42.250 "block_size": 512, 00:07:42.250 "num_blocks": 63488, 00:07:42.250 "uuid": "df5ac485-064c-4e14-84bf-4be1b84807b4", 00:07:42.250 "assigned_rate_limits": { 00:07:42.250 "rw_ios_per_sec": 0, 00:07:42.250 "rw_mbytes_per_sec": 0, 00:07:42.250 "r_mbytes_per_sec": 0, 00:07:42.250 "w_mbytes_per_sec": 0 00:07:42.250 }, 00:07:42.250 "claimed": false, 00:07:42.250 "zoned": false, 00:07:42.250 "supported_io_types": { 00:07:42.250 "read": true, 00:07:42.250 "write": true, 00:07:42.250 "unmap": false, 00:07:42.250 "flush": false, 00:07:42.250 "reset": true, 00:07:42.250 "nvme_admin": false, 00:07:42.250 "nvme_io": false, 00:07:42.250 "nvme_io_md": false, 00:07:42.250 "write_zeroes": true, 00:07:42.250 "zcopy": false, 00:07:42.250 "get_zone_info": false, 00:07:42.250 "zone_management": false, 00:07:42.250 "zone_append": false, 00:07:42.250 "compare": false, 00:07:42.250 "compare_and_write": false, 00:07:42.250 "abort": false, 00:07:42.250 "seek_hole": false, 00:07:42.251 "seek_data": false, 00:07:42.251 "copy": false, 00:07:42.251 "nvme_iov_md": false 00:07:42.251 }, 00:07:42.251 "memory_domains": [ 00:07:42.251 { 00:07:42.251 "dma_device_id": "system", 00:07:42.251 
"dma_device_type": 1 00:07:42.251 }, 00:07:42.251 { 00:07:42.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.251 "dma_device_type": 2 00:07:42.251 }, 00:07:42.251 { 00:07:42.251 "dma_device_id": "system", 00:07:42.251 "dma_device_type": 1 00:07:42.251 }, 00:07:42.251 { 00:07:42.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.251 "dma_device_type": 2 00:07:42.251 } 00:07:42.251 ], 00:07:42.251 "driver_specific": { 00:07:42.251 "raid": { 00:07:42.251 "uuid": "df5ac485-064c-4e14-84bf-4be1b84807b4", 00:07:42.251 "strip_size_kb": 0, 00:07:42.251 "state": "online", 00:07:42.251 "raid_level": "raid1", 00:07:42.251 "superblock": true, 00:07:42.251 "num_base_bdevs": 2, 00:07:42.251 "num_base_bdevs_discovered": 2, 00:07:42.251 "num_base_bdevs_operational": 2, 00:07:42.251 "base_bdevs_list": [ 00:07:42.251 { 00:07:42.251 "name": "BaseBdev1", 00:07:42.251 "uuid": "92ddc8aa-ebec-4c28-ac88-7f436063d020", 00:07:42.251 "is_configured": true, 00:07:42.251 "data_offset": 2048, 00:07:42.251 "data_size": 63488 00:07:42.251 }, 00:07:42.251 { 00:07:42.251 "name": "BaseBdev2", 00:07:42.251 "uuid": "e2e6ca48-ee89-4ad8-a90b-0610217c8e74", 00:07:42.251 "is_configured": true, 00:07:42.251 "data_offset": 2048, 00:07:42.251 "data_size": 63488 00:07:42.251 } 00:07:42.251 ] 00:07:42.251 } 00:07:42.251 } 00:07:42.251 }' 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:42.251 BaseBdev2' 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:42.251 01:08:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.251 [2024-10-15 01:08:54.922414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.251 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.515 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.515 "name": "Existed_Raid", 00:07:42.516 "uuid": "df5ac485-064c-4e14-84bf-4be1b84807b4", 00:07:42.516 "strip_size_kb": 0, 00:07:42.516 "state": "online", 00:07:42.516 "raid_level": "raid1", 00:07:42.516 "superblock": true, 00:07:42.516 "num_base_bdevs": 2, 00:07:42.516 "num_base_bdevs_discovered": 1, 00:07:42.516 "num_base_bdevs_operational": 1, 00:07:42.516 "base_bdevs_list": [ 00:07:42.516 { 00:07:42.516 "name": null, 00:07:42.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.516 "is_configured": false, 00:07:42.516 "data_offset": 0, 00:07:42.516 "data_size": 63488 00:07:42.516 }, 00:07:42.516 { 00:07:42.516 "name": "BaseBdev2", 00:07:42.516 "uuid": "e2e6ca48-ee89-4ad8-a90b-0610217c8e74", 00:07:42.516 "is_configured": true, 00:07:42.516 "data_offset": 2048, 00:07:42.516 "data_size": 63488 00:07:42.516 } 00:07:42.516 ] 00:07:42.516 }' 00:07:42.516 01:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.516 01:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.783 [2024-10-15 01:08:55.428709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:42.783 [2024-10-15 01:08:55.428806] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:42.783 [2024-10-15 01:08:55.440292] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.783 [2024-10-15 01:08:55.440343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:42.783 [2024-10-15 01:08:55.440355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73981 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73981 ']' 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73981 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.783 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73981 00:07:43.043 killing process with pid 73981 00:07:43.043 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:07:43.043 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.043 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73981' 00:07:43.043 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73981 00:07:43.043 [2024-10-15 01:08:55.527378] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.043 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73981 00:07:43.043 [2024-10-15 01:08:55.528367] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.043 01:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:43.043 00:07:43.043 real 0m3.763s 00:07:43.043 user 0m5.954s 00:07:43.043 sys 0m0.731s 00:07:43.043 ************************************ 00:07:43.043 END TEST raid_state_function_test_sb 00:07:43.043 ************************************ 00:07:43.043 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.043 01:08:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.303 01:08:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:43.303 01:08:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:43.303 01:08:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.303 01:08:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.303 ************************************ 00:07:43.303 START TEST raid_superblock_test 00:07:43.303 ************************************ 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74219 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74219 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74219 ']' 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.303 01:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.303 [2024-10-15 01:08:55.885089] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:07:43.303 [2024-10-15 01:08:55.885246] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74219 ] 00:07:43.563 [2024-10-15 01:08:56.028773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.563 [2024-10-15 01:08:56.055848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.563 [2024-10-15 01:08:56.098437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.563 [2024-10-15 01:08:56.098562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:44.132 01:08:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.132 malloc1 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.132 [2024-10-15 01:08:56.737272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:44.132 [2024-10-15 01:08:56.737378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.132 [2024-10-15 01:08:56.737416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:44.132 [2024-10-15 01:08:56.737447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.132 
[2024-10-15 01:08:56.739484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.132 [2024-10-15 01:08:56.739558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:44.132 pt1 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.132 malloc2 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.132 01:08:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.132 [2024-10-15 01:08:56.769681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:44.132 [2024-10-15 01:08:56.769779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.132 [2024-10-15 01:08:56.769810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:44.132 [2024-10-15 01:08:56.769838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.132 [2024-10-15 01:08:56.771833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.132 [2024-10-15 01:08:56.771903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:44.132 pt2 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.132 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.132 [2024-10-15 01:08:56.781697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:44.133 [2024-10-15 01:08:56.783479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:44.133 [2024-10-15 01:08:56.783616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:44.133 [2024-10-15 01:08:56.783629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:44.133 [2024-10-15 
01:08:56.783872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:44.133 [2024-10-15 01:08:56.784003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:44.133 [2024-10-15 01:08:56.784012] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:44.133 [2024-10-15 01:08:56.784118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.133 01:08:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.133 "name": "raid_bdev1", 00:07:44.133 "uuid": "66827672-0d45-4c40-95f0-e135f838a4dd", 00:07:44.133 "strip_size_kb": 0, 00:07:44.133 "state": "online", 00:07:44.133 "raid_level": "raid1", 00:07:44.133 "superblock": true, 00:07:44.133 "num_base_bdevs": 2, 00:07:44.133 "num_base_bdevs_discovered": 2, 00:07:44.133 "num_base_bdevs_operational": 2, 00:07:44.133 "base_bdevs_list": [ 00:07:44.133 { 00:07:44.133 "name": "pt1", 00:07:44.133 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.133 "is_configured": true, 00:07:44.133 "data_offset": 2048, 00:07:44.133 "data_size": 63488 00:07:44.133 }, 00:07:44.133 { 00:07:44.133 "name": "pt2", 00:07:44.133 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.133 "is_configured": true, 00:07:44.133 "data_offset": 2048, 00:07:44.133 "data_size": 63488 00:07:44.133 } 00:07:44.133 ] 00:07:44.133 }' 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.133 01:08:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.700 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:44.700 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:44.700 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:44.700 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:44.700 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:44.700 
01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:44.700 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:44.700 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:44.700 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.700 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.700 [2024-10-15 01:08:57.241212] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.700 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.700 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.700 "name": "raid_bdev1", 00:07:44.700 "aliases": [ 00:07:44.700 "66827672-0d45-4c40-95f0-e135f838a4dd" 00:07:44.700 ], 00:07:44.700 "product_name": "Raid Volume", 00:07:44.700 "block_size": 512, 00:07:44.700 "num_blocks": 63488, 00:07:44.700 "uuid": "66827672-0d45-4c40-95f0-e135f838a4dd", 00:07:44.700 "assigned_rate_limits": { 00:07:44.700 "rw_ios_per_sec": 0, 00:07:44.700 "rw_mbytes_per_sec": 0, 00:07:44.700 "r_mbytes_per_sec": 0, 00:07:44.700 "w_mbytes_per_sec": 0 00:07:44.700 }, 00:07:44.700 "claimed": false, 00:07:44.700 "zoned": false, 00:07:44.700 "supported_io_types": { 00:07:44.701 "read": true, 00:07:44.701 "write": true, 00:07:44.701 "unmap": false, 00:07:44.701 "flush": false, 00:07:44.701 "reset": true, 00:07:44.701 "nvme_admin": false, 00:07:44.701 "nvme_io": false, 00:07:44.701 "nvme_io_md": false, 00:07:44.701 "write_zeroes": true, 00:07:44.701 "zcopy": false, 00:07:44.701 "get_zone_info": false, 00:07:44.701 "zone_management": false, 00:07:44.701 "zone_append": false, 00:07:44.701 "compare": false, 00:07:44.701 "compare_and_write": false, 00:07:44.701 "abort": false, 00:07:44.701 "seek_hole": false, 
00:07:44.701 "seek_data": false, 00:07:44.701 "copy": false, 00:07:44.701 "nvme_iov_md": false 00:07:44.701 }, 00:07:44.701 "memory_domains": [ 00:07:44.701 { 00:07:44.701 "dma_device_id": "system", 00:07:44.701 "dma_device_type": 1 00:07:44.701 }, 00:07:44.701 { 00:07:44.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.701 "dma_device_type": 2 00:07:44.701 }, 00:07:44.701 { 00:07:44.701 "dma_device_id": "system", 00:07:44.701 "dma_device_type": 1 00:07:44.701 }, 00:07:44.701 { 00:07:44.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.701 "dma_device_type": 2 00:07:44.701 } 00:07:44.701 ], 00:07:44.701 "driver_specific": { 00:07:44.701 "raid": { 00:07:44.701 "uuid": "66827672-0d45-4c40-95f0-e135f838a4dd", 00:07:44.701 "strip_size_kb": 0, 00:07:44.701 "state": "online", 00:07:44.701 "raid_level": "raid1", 00:07:44.701 "superblock": true, 00:07:44.701 "num_base_bdevs": 2, 00:07:44.701 "num_base_bdevs_discovered": 2, 00:07:44.701 "num_base_bdevs_operational": 2, 00:07:44.701 "base_bdevs_list": [ 00:07:44.701 { 00:07:44.701 "name": "pt1", 00:07:44.701 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.701 "is_configured": true, 00:07:44.701 "data_offset": 2048, 00:07:44.701 "data_size": 63488 00:07:44.701 }, 00:07:44.701 { 00:07:44.701 "name": "pt2", 00:07:44.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.701 "is_configured": true, 00:07:44.701 "data_offset": 2048, 00:07:44.701 "data_size": 63488 00:07:44.701 } 00:07:44.701 ] 00:07:44.701 } 00:07:44.701 } 00:07:44.701 }' 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:44.701 pt2' 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.701 01:08:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.701 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.960 [2024-10-15 01:08:57.468721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=66827672-0d45-4c40-95f0-e135f838a4dd 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 66827672-0d45-4c40-95f0-e135f838a4dd ']' 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.960 [2024-10-15 01:08:57.512403] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.960 [2024-10-15 01:08:57.512427] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.960 [2024-10-15 01:08:57.512502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.960 [2024-10-15 01:08:57.512566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.960 [2024-10-15 01:08:57.512575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.960 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.961 [2024-10-15 01:08:57.644242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:44.961 [2024-10-15 01:08:57.646026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:44.961 [2024-10-15 01:08:57.646092] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:07:44.961 [2024-10-15 01:08:57.646141] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:44.961 [2024-10-15 01:08:57.646159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.961 [2024-10-15 01:08:57.646168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:44.961 request: 00:07:44.961 { 00:07:44.961 "name": "raid_bdev1", 00:07:44.961 "raid_level": "raid1", 00:07:44.961 "base_bdevs": [ 00:07:44.961 "malloc1", 00:07:44.961 "malloc2" 00:07:44.961 ], 00:07:44.961 "superblock": false, 00:07:44.961 "method": "bdev_raid_create", 00:07:44.961 "req_id": 1 00:07:44.961 } 00:07:44.961 Got JSON-RPC error response 00:07:44.961 response: 00:07:44.961 { 00:07:44.961 "code": -17, 00:07:44.961 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:44.961 } 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.961 01:08:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.220 [2024-10-15 01:08:57.708097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:45.220 [2024-10-15 01:08:57.708147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.220 [2024-10-15 01:08:57.708166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:45.220 [2024-10-15 01:08:57.708175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.220 [2024-10-15 01:08:57.710333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.220 [2024-10-15 01:08:57.710436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:45.220 [2024-10-15 01:08:57.710510] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:45.220 [2024-10-15 01:08:57.710541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:45.220 pt1 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.220 01:08:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.220 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.220 "name": "raid_bdev1", 00:07:45.220 "uuid": "66827672-0d45-4c40-95f0-e135f838a4dd", 00:07:45.220 "strip_size_kb": 0, 00:07:45.220 "state": "configuring", 00:07:45.220 "raid_level": "raid1", 00:07:45.220 "superblock": true, 00:07:45.220 "num_base_bdevs": 2, 00:07:45.220 "num_base_bdevs_discovered": 1, 00:07:45.221 "num_base_bdevs_operational": 2, 00:07:45.221 "base_bdevs_list": [ 00:07:45.221 { 00:07:45.221 "name": "pt1", 00:07:45.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:45.221 
"is_configured": true, 00:07:45.221 "data_offset": 2048, 00:07:45.221 "data_size": 63488 00:07:45.221 }, 00:07:45.221 { 00:07:45.221 "name": null, 00:07:45.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.221 "is_configured": false, 00:07:45.221 "data_offset": 2048, 00:07:45.221 "data_size": 63488 00:07:45.221 } 00:07:45.221 ] 00:07:45.221 }' 00:07:45.221 01:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.221 01:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.484 [2024-10-15 01:08:58.131390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:45.484 [2024-10-15 01:08:58.131487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.484 [2024-10-15 01:08:58.131527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:45.484 [2024-10-15 01:08:58.131556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.484 [2024-10-15 01:08:58.131956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.484 [2024-10-15 01:08:58.132011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:45.484 [2024-10-15 01:08:58.132105] 
bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:45.484 [2024-10-15 01:08:58.132168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:45.484 [2024-10-15 01:08:58.132317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:45.484 [2024-10-15 01:08:58.132359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:45.484 [2024-10-15 01:08:58.132666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:45.484 [2024-10-15 01:08:58.132836] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:45.484 [2024-10-15 01:08:58.132888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:45.484 [2024-10-15 01:08:58.133041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.484 pt2 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.484 
01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.484 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.484 "name": "raid_bdev1", 00:07:45.484 "uuid": "66827672-0d45-4c40-95f0-e135f838a4dd", 00:07:45.485 "strip_size_kb": 0, 00:07:45.485 "state": "online", 00:07:45.485 "raid_level": "raid1", 00:07:45.485 "superblock": true, 00:07:45.485 "num_base_bdevs": 2, 00:07:45.485 "num_base_bdevs_discovered": 2, 00:07:45.485 "num_base_bdevs_operational": 2, 00:07:45.485 "base_bdevs_list": [ 00:07:45.485 { 00:07:45.485 "name": "pt1", 00:07:45.485 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:45.485 "is_configured": true, 00:07:45.485 "data_offset": 2048, 00:07:45.485 "data_size": 63488 00:07:45.485 }, 00:07:45.485 { 00:07:45.485 "name": "pt2", 00:07:45.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.485 "is_configured": true, 00:07:45.485 "data_offset": 2048, 00:07:45.485 "data_size": 63488 00:07:45.485 } 00:07:45.485 ] 00:07:45.485 }' 00:07:45.485 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:45.485 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.056 [2024-10-15 01:08:58.543136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.056 "name": "raid_bdev1", 00:07:46.056 "aliases": [ 00:07:46.056 "66827672-0d45-4c40-95f0-e135f838a4dd" 00:07:46.056 ], 00:07:46.056 "product_name": "Raid Volume", 00:07:46.056 "block_size": 512, 00:07:46.056 "num_blocks": 63488, 00:07:46.056 "uuid": "66827672-0d45-4c40-95f0-e135f838a4dd", 00:07:46.056 "assigned_rate_limits": { 00:07:46.056 "rw_ios_per_sec": 0, 00:07:46.056 "rw_mbytes_per_sec": 0, 00:07:46.056 "r_mbytes_per_sec": 0, 00:07:46.056 "w_mbytes_per_sec": 0 
00:07:46.056 }, 00:07:46.056 "claimed": false, 00:07:46.056 "zoned": false, 00:07:46.056 "supported_io_types": { 00:07:46.056 "read": true, 00:07:46.056 "write": true, 00:07:46.056 "unmap": false, 00:07:46.056 "flush": false, 00:07:46.056 "reset": true, 00:07:46.056 "nvme_admin": false, 00:07:46.056 "nvme_io": false, 00:07:46.056 "nvme_io_md": false, 00:07:46.056 "write_zeroes": true, 00:07:46.056 "zcopy": false, 00:07:46.056 "get_zone_info": false, 00:07:46.056 "zone_management": false, 00:07:46.056 "zone_append": false, 00:07:46.056 "compare": false, 00:07:46.056 "compare_and_write": false, 00:07:46.056 "abort": false, 00:07:46.056 "seek_hole": false, 00:07:46.056 "seek_data": false, 00:07:46.056 "copy": false, 00:07:46.056 "nvme_iov_md": false 00:07:46.056 }, 00:07:46.056 "memory_domains": [ 00:07:46.056 { 00:07:46.056 "dma_device_id": "system", 00:07:46.056 "dma_device_type": 1 00:07:46.056 }, 00:07:46.056 { 00:07:46.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.056 "dma_device_type": 2 00:07:46.056 }, 00:07:46.056 { 00:07:46.056 "dma_device_id": "system", 00:07:46.056 "dma_device_type": 1 00:07:46.056 }, 00:07:46.056 { 00:07:46.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.056 "dma_device_type": 2 00:07:46.056 } 00:07:46.056 ], 00:07:46.056 "driver_specific": { 00:07:46.056 "raid": { 00:07:46.056 "uuid": "66827672-0d45-4c40-95f0-e135f838a4dd", 00:07:46.056 "strip_size_kb": 0, 00:07:46.056 "state": "online", 00:07:46.056 "raid_level": "raid1", 00:07:46.056 "superblock": true, 00:07:46.056 "num_base_bdevs": 2, 00:07:46.056 "num_base_bdevs_discovered": 2, 00:07:46.056 "num_base_bdevs_operational": 2, 00:07:46.056 "base_bdevs_list": [ 00:07:46.056 { 00:07:46.056 "name": "pt1", 00:07:46.056 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.056 "is_configured": true, 00:07:46.056 "data_offset": 2048, 00:07:46.056 "data_size": 63488 00:07:46.056 }, 00:07:46.056 { 00:07:46.056 "name": "pt2", 00:07:46.056 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:46.056 "is_configured": true, 00:07:46.056 "data_offset": 2048, 00:07:46.056 "data_size": 63488 00:07:46.056 } 00:07:46.056 ] 00:07:46.056 } 00:07:46.056 } 00:07:46.056 }' 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:46.056 pt2' 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.056 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.057 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.057 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.057 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:46.057 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.057 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.057 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.057 [2024-10-15 01:08:58.770693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.316 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.316 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 66827672-0d45-4c40-95f0-e135f838a4dd '!=' 66827672-0d45-4c40-95f0-e135f838a4dd ']' 00:07:46.316 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:46.317 [2024-10-15 01:08:58.822431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:46.317 "name": "raid_bdev1", 00:07:46.317 "uuid": "66827672-0d45-4c40-95f0-e135f838a4dd", 00:07:46.317 "strip_size_kb": 0, 00:07:46.317 "state": "online", 00:07:46.317 "raid_level": "raid1", 00:07:46.317 "superblock": true, 00:07:46.317 "num_base_bdevs": 2, 00:07:46.317 "num_base_bdevs_discovered": 1, 00:07:46.317 "num_base_bdevs_operational": 1, 00:07:46.317 "base_bdevs_list": [ 00:07:46.317 { 00:07:46.317 "name": null, 00:07:46.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.317 "is_configured": false, 00:07:46.317 "data_offset": 0, 00:07:46.317 "data_size": 63488 00:07:46.317 }, 00:07:46.317 { 00:07:46.317 "name": "pt2", 00:07:46.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.317 "is_configured": true, 00:07:46.317 "data_offset": 2048, 00:07:46.317 "data_size": 63488 00:07:46.317 } 00:07:46.317 ] 00:07:46.317 }' 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.317 01:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.576 [2024-10-15 01:08:59.245705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.576 [2024-10-15 01:08:59.245778] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.576 [2024-10-15 01:08:59.245879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.576 [2024-10-15 01:08:59.245944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.576 [2024-10-15 01:08:59.245989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:46.576 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.577 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.836 [2024-10-15 01:08:59.301595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:46.836 [2024-10-15 01:08:59.301688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.836 [2024-10-15 01:08:59.301726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:07:46.836 [2024-10-15 01:08:59.301752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.836 [2024-10-15 01:08:59.303955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.836 [2024-10-15 01:08:59.304042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:46.836 [2024-10-15 01:08:59.304139] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:46.836 [2024-10-15 01:08:59.304214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:46.836 [2024-10-15 01:08:59.304330] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:46.836 [2024-10-15 01:08:59.304367] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:46.837 [2024-10-15 01:08:59.304623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:46.837 [2024-10-15 01:08:59.304762] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:46.837 [2024-10-15 01:08:59.304802] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000001c80 00:07:46.837 [2024-10-15 01:08:59.304936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.837 pt2 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:46.837 "name": "raid_bdev1", 00:07:46.837 "uuid": "66827672-0d45-4c40-95f0-e135f838a4dd", 00:07:46.837 "strip_size_kb": 0, 00:07:46.837 "state": "online", 00:07:46.837 "raid_level": "raid1", 00:07:46.837 "superblock": true, 00:07:46.837 "num_base_bdevs": 2, 00:07:46.837 "num_base_bdevs_discovered": 1, 00:07:46.837 "num_base_bdevs_operational": 1, 00:07:46.837 "base_bdevs_list": [ 00:07:46.837 { 00:07:46.837 "name": null, 00:07:46.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.837 "is_configured": false, 00:07:46.837 "data_offset": 2048, 00:07:46.837 "data_size": 63488 00:07:46.837 }, 00:07:46.837 { 00:07:46.837 "name": "pt2", 00:07:46.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.837 "is_configured": true, 00:07:46.837 "data_offset": 2048, 00:07:46.837 "data_size": 63488 00:07:46.837 } 00:07:46.837 ] 00:07:46.837 }' 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.837 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.097 [2024-10-15 01:08:59.716899] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:47.097 [2024-10-15 01:08:59.716928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.097 [2024-10-15 01:08:59.716998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.097 [2024-10-15 01:08:59.717043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.097 [2024-10-15 01:08:59.717054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.097 [2024-10-15 01:08:59.780788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:47.097 [2024-10-15 01:08:59.780897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.097 [2024-10-15 01:08:59.780929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:07:47.097 [2024-10-15 01:08:59.780981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.097 [2024-10-15 01:08:59.783044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.097 [2024-10-15 01:08:59.783116] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:47.097 [2024-10-15 01:08:59.783225] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:47.097 [2024-10-15 01:08:59.783304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:47.097 [2024-10-15 01:08:59.783448] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:47.097 [2024-10-15 01:08:59.783507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:47.097 [2024-10-15 01:08:59.783546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:07:47.097 [2024-10-15 01:08:59.783622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:47.097 [2024-10-15 01:08:59.783731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:07:47.097 [2024-10-15 01:08:59.783772] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:47.097 [2024-10-15 01:08:59.783991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:07:47.097 [2024-10-15 01:08:59.784138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:07:47.097 [2024-10-15 01:08:59.784192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:07:47.097 [2024-10-15 01:08:59.784337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.097 pt1 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.097 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.357 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.357 "name": "raid_bdev1", 00:07:47.357 "uuid": "66827672-0d45-4c40-95f0-e135f838a4dd", 00:07:47.357 "strip_size_kb": 0, 00:07:47.357 "state": "online", 00:07:47.357 "raid_level": "raid1", 00:07:47.357 "superblock": true, 00:07:47.357 "num_base_bdevs": 2, 00:07:47.357 "num_base_bdevs_discovered": 1, 00:07:47.357 "num_base_bdevs_operational": 
1, 00:07:47.357 "base_bdevs_list": [ 00:07:47.357 { 00:07:47.357 "name": null, 00:07:47.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.357 "is_configured": false, 00:07:47.357 "data_offset": 2048, 00:07:47.357 "data_size": 63488 00:07:47.357 }, 00:07:47.357 { 00:07:47.357 "name": "pt2", 00:07:47.357 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.357 "is_configured": true, 00:07:47.357 "data_offset": 2048, 00:07:47.357 "data_size": 63488 00:07:47.357 } 00:07:47.357 ] 00:07:47.357 }' 00:07:47.357 01:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.357 01:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.616 01:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:47.616 01:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:47.616 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.616 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.617 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.617 01:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:47.617 01:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:47.617 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.617 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.617 01:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:47.617 [2024-10-15 01:09:00.260274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.617 01:09:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.617 01:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 66827672-0d45-4c40-95f0-e135f838a4dd '!=' 66827672-0d45-4c40-95f0-e135f838a4dd ']' 00:07:47.617 01:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74219 00:07:47.617 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74219 ']' 00:07:47.617 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74219 00:07:47.617 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:47.617 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.617 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74219 00:07:47.877 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:47.877 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.877 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74219' 00:07:47.877 killing process with pid 74219 00:07:47.877 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74219 00:07:47.877 [2024-10-15 01:09:00.347475] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:47.877 [2024-10-15 01:09:00.347604] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.877 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74219 00:07:47.877 [2024-10-15 01:09:00.347687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.877 [2024-10-15 01:09:00.347702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state 
offline 00:07:47.877 [2024-10-15 01:09:00.369707] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.877 01:09:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:47.877 ************************************ 00:07:47.877 END TEST raid_superblock_test 00:07:47.877 ************************************ 00:07:47.877 00:07:47.877 real 0m4.774s 00:07:47.877 user 0m7.872s 00:07:47.877 sys 0m0.911s 00:07:47.877 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.877 01:09:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.137 01:09:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:48.137 01:09:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:48.137 01:09:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.137 01:09:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.137 ************************************ 00:07:48.137 START TEST raid_read_error_test 00:07:48.137 ************************************ 00:07:48.137 01:09:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:07:48.137 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:48.137 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:48.137 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:48.137 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:48.137 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:48.137 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:48.137 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:07:48.137 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:48.137 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SrbRfc2JzD 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74530 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74530 00:07:48.138 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74530 ']' 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.138 01:09:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.138 [2024-10-15 01:09:00.753026] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:07:48.138 [2024-10-15 01:09:00.753240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74530 ] 00:07:48.397 [2024-10-15 01:09:00.897890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.397 [2024-10-15 01:09:00.925149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.397 [2024-10-15 01:09:00.968340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.397 [2024-10-15 01:09:00.968369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.967 BaseBdev1_malloc 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.967 true 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.967 [2024-10-15 01:09:01.606693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:48.967 [2024-10-15 01:09:01.606746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.967 [2024-10-15 01:09:01.606780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:48.967 [2024-10-15 01:09:01.606789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.967 [2024-10-15 01:09:01.608959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.967 [2024-10-15 01:09:01.608995] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:07:48.967 BaseBdev1 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.967 BaseBdev2_malloc 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.967 true 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.967 [2024-10-15 01:09:01.647281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:48.967 [2024-10-15 01:09:01.647343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.967 [2024-10-15 01:09:01.647360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:48.967 [2024-10-15 01:09:01.647377] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.967 [2024-10-15 01:09:01.649390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.967 [2024-10-15 01:09:01.649484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:48.967 BaseBdev2 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.967 [2024-10-15 01:09:01.659349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:48.967 [2024-10-15 01:09:01.661147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:48.967 [2024-10-15 01:09:01.661333] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:48.967 [2024-10-15 01:09:01.661351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:48.967 [2024-10-15 01:09:01.661582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:48.967 [2024-10-15 01:09:01.661720] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:48.967 [2024-10-15 01:09:01.661733] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:48.967 [2024-10-15 01:09:01.661843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.967 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.968 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.968 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.968 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.968 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.968 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.968 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.968 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.968 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.968 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.968 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.227 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.227 "name": "raid_bdev1", 00:07:49.227 "uuid": "4d9d5965-19c4-4fb4-9878-5dafb1b89108", 00:07:49.227 "strip_size_kb": 0, 00:07:49.227 "state": "online", 00:07:49.227 "raid_level": "raid1", 00:07:49.227 "superblock": true, 00:07:49.227 "num_base_bdevs": 2, 00:07:49.227 
"num_base_bdevs_discovered": 2, 00:07:49.227 "num_base_bdevs_operational": 2, 00:07:49.227 "base_bdevs_list": [ 00:07:49.227 { 00:07:49.227 "name": "BaseBdev1", 00:07:49.227 "uuid": "e1b13952-c8b8-5856-951c-ebde111ec2fd", 00:07:49.227 "is_configured": true, 00:07:49.227 "data_offset": 2048, 00:07:49.227 "data_size": 63488 00:07:49.227 }, 00:07:49.227 { 00:07:49.227 "name": "BaseBdev2", 00:07:49.227 "uuid": "dfd3228a-8f49-5210-b2b8-7ceb8e532ea5", 00:07:49.227 "is_configured": true, 00:07:49.227 "data_offset": 2048, 00:07:49.227 "data_size": 63488 00:07:49.227 } 00:07:49.227 ] 00:07:49.227 }' 00:07:49.227 01:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.227 01:09:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.487 01:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:49.487 01:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:49.487 [2024-10-15 01:09:02.174824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:50.462 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:50.462 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.462 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.462 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.462 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:50.462 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:50.463 01:09:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.463 "name": "raid_bdev1", 00:07:50.463 "uuid": "4d9d5965-19c4-4fb4-9878-5dafb1b89108", 00:07:50.463 "strip_size_kb": 0, 00:07:50.463 "state": "online", 
00:07:50.463 "raid_level": "raid1", 00:07:50.463 "superblock": true, 00:07:50.463 "num_base_bdevs": 2, 00:07:50.463 "num_base_bdevs_discovered": 2, 00:07:50.463 "num_base_bdevs_operational": 2, 00:07:50.463 "base_bdevs_list": [ 00:07:50.463 { 00:07:50.463 "name": "BaseBdev1", 00:07:50.463 "uuid": "e1b13952-c8b8-5856-951c-ebde111ec2fd", 00:07:50.463 "is_configured": true, 00:07:50.463 "data_offset": 2048, 00:07:50.463 "data_size": 63488 00:07:50.463 }, 00:07:50.463 { 00:07:50.463 "name": "BaseBdev2", 00:07:50.463 "uuid": "dfd3228a-8f49-5210-b2b8-7ceb8e532ea5", 00:07:50.463 "is_configured": true, 00:07:50.463 "data_offset": 2048, 00:07:50.463 "data_size": 63488 00:07:50.463 } 00:07:50.463 ] 00:07:50.463 }' 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.463 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.032 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:51.032 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.032 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.032 [2024-10-15 01:09:03.574647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:51.032 [2024-10-15 01:09:03.574685] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.032 [2024-10-15 01:09:03.577325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.032 [2024-10-15 01:09:03.577408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.032 [2024-10-15 01:09:03.577523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.032 [2024-10-15 01:09:03.577568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name 
raid_bdev1, state offline 00:07:51.032 { 00:07:51.032 "results": [ 00:07:51.032 { 00:07:51.032 "job": "raid_bdev1", 00:07:51.032 "core_mask": "0x1", 00:07:51.032 "workload": "randrw", 00:07:51.032 "percentage": 50, 00:07:51.032 "status": "finished", 00:07:51.032 "queue_depth": 1, 00:07:51.032 "io_size": 131072, 00:07:51.032 "runtime": 1.400627, 00:07:51.032 "iops": 19853.251436678, 00:07:51.032 "mibps": 2481.65642958475, 00:07:51.032 "io_failed": 0, 00:07:51.032 "io_timeout": 0, 00:07:51.032 "avg_latency_us": 47.85625434706444, 00:07:51.032 "min_latency_us": 21.687336244541484, 00:07:51.032 "max_latency_us": 1473.844541484716 00:07:51.032 } 00:07:51.032 ], 00:07:51.032 "core_count": 1 00:07:51.032 } 00:07:51.032 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.032 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74530 00:07:51.032 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74530 ']' 00:07:51.032 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74530 00:07:51.032 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:51.032 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.032 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74530 00:07:51.032 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:51.032 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:51.032 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74530' 00:07:51.032 killing process with pid 74530 00:07:51.032 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74530 00:07:51.033 [2024-10-15 
01:09:03.626421] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.033 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74530 00:07:51.033 [2024-10-15 01:09:03.641503] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:51.292 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SrbRfc2JzD 00:07:51.292 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:51.292 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:51.292 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:51.293 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:51.293 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.293 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:51.293 ************************************ 00:07:51.293 END TEST raid_read_error_test 00:07:51.293 ************************************ 00:07:51.293 01:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:51.293 00:07:51.293 real 0m3.196s 00:07:51.293 user 0m4.118s 00:07:51.293 sys 0m0.465s 00:07:51.293 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.293 01:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.293 01:09:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:51.293 01:09:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:51.293 01:09:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.293 01:09:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:51.293 ************************************ 00:07:51.293 START TEST 
raid_write_error_test 00:07:51.293 ************************************ 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:51.293 01:09:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HMPVB7BmDr 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74659 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74659 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74659 ']' 00:07:51.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.293 01:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.553 [2024-10-15 01:09:04.025057] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:07:51.553 [2024-10-15 01:09:04.025266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74659 ] 00:07:51.553 [2024-10-15 01:09:04.152913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.553 [2024-10-15 01:09:04.178572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.553 [2024-10-15 01:09:04.222015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.553 [2024-10-15 01:09:04.222134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.122 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.123 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.383 BaseBdev1_malloc 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.383 true 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.383 [2024-10-15 01:09:04.880837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:52.383 [2024-10-15 01:09:04.880944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.383 [2024-10-15 01:09:04.880977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:52.383 [2024-10-15 01:09:04.880986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.383 [2024-10-15 01:09:04.883024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.383 [2024-10-15 01:09:04.883066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:52.383 BaseBdev1 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.383 BaseBdev2_malloc 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:52.383 01:09:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.383 true 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.383 [2024-10-15 01:09:04.921407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:52.383 [2024-10-15 01:09:04.921453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.383 [2024-10-15 01:09:04.921471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:52.383 [2024-10-15 01:09:04.921486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.383 [2024-10-15 01:09:04.923517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.383 [2024-10-15 01:09:04.923599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:52.383 BaseBdev2 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.383 [2024-10-15 01:09:04.933452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:52.383 [2024-10-15 01:09:04.935268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.383 [2024-10-15 01:09:04.935512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:52.383 [2024-10-15 01:09:04.935554] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:52.383 [2024-10-15 01:09:04.935854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:52.383 [2024-10-15 01:09:04.936022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:52.383 [2024-10-15 01:09:04.936070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:52.383 [2024-10-15 01:09:04.936273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.383 "name": "raid_bdev1", 00:07:52.383 "uuid": "a151258f-a0f2-4a03-9eff-97dd616c20b7", 00:07:52.383 "strip_size_kb": 0, 00:07:52.383 "state": "online", 00:07:52.383 "raid_level": "raid1", 00:07:52.383 "superblock": true, 00:07:52.383 "num_base_bdevs": 2, 00:07:52.383 "num_base_bdevs_discovered": 2, 00:07:52.383 "num_base_bdevs_operational": 2, 00:07:52.383 "base_bdevs_list": [ 00:07:52.383 { 00:07:52.383 "name": "BaseBdev1", 00:07:52.383 "uuid": "5b9377d2-599b-5fc8-89f5-5a3aeec55a8b", 00:07:52.383 "is_configured": true, 00:07:52.383 "data_offset": 2048, 00:07:52.383 "data_size": 63488 00:07:52.383 }, 00:07:52.383 { 00:07:52.383 "name": "BaseBdev2", 00:07:52.383 "uuid": "69364b16-efad-59ff-9ede-7e79c6da80cd", 00:07:52.383 "is_configured": true, 00:07:52.383 "data_offset": 2048, 00:07:52.383 "data_size": 63488 00:07:52.383 } 00:07:52.383 ] 00:07:52.383 }' 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.383 01:09:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.953 01:09:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:52.953 01:09:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:52.953 [2024-10-15 01:09:05.468928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.893 [2024-10-15 01:09:06.385280] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:53.893 [2024-10-15 01:09:06.385337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:53.893 [2024-10-15 01:09:06.385556] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.893 01:09:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.893 "name": "raid_bdev1", 00:07:53.893 "uuid": "a151258f-a0f2-4a03-9eff-97dd616c20b7", 00:07:53.893 "strip_size_kb": 0, 00:07:53.893 "state": "online", 00:07:53.893 "raid_level": "raid1", 00:07:53.893 "superblock": true, 00:07:53.893 "num_base_bdevs": 2, 00:07:53.893 "num_base_bdevs_discovered": 1, 00:07:53.893 "num_base_bdevs_operational": 1, 00:07:53.893 "base_bdevs_list": [ 00:07:53.893 { 00:07:53.893 "name": null, 00:07:53.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.893 "is_configured": false, 00:07:53.893 "data_offset": 0, 00:07:53.893 "data_size": 63488 00:07:53.893 }, 
00:07:53.893 { 00:07:53.893 "name": "BaseBdev2", 00:07:53.893 "uuid": "69364b16-efad-59ff-9ede-7e79c6da80cd", 00:07:53.893 "is_configured": true, 00:07:53.893 "data_offset": 2048, 00:07:53.893 "data_size": 63488 00:07:53.893 } 00:07:53.893 ] 00:07:53.893 }' 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.893 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.152 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:54.152 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.152 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.153 [2024-10-15 01:09:06.815109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.153 [2024-10-15 01:09:06.815144] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.153 [2024-10-15 01:09:06.817624] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.153 [2024-10-15 01:09:06.817669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.153 [2024-10-15 01:09:06.817721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.153 [2024-10-15 01:09:06.817732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:54.153 { 00:07:54.153 "results": [ 00:07:54.153 { 00:07:54.153 "job": "raid_bdev1", 00:07:54.153 "core_mask": "0x1", 00:07:54.153 "workload": "randrw", 00:07:54.153 "percentage": 50, 00:07:54.153 "status": "finished", 00:07:54.153 "queue_depth": 1, 00:07:54.153 "io_size": 131072, 00:07:54.153 "runtime": 1.34701, 00:07:54.153 "iops": 23369.53697448423, 00:07:54.153 "mibps": 2921.1921218105285, 00:07:54.153 "io_failed": 0, 
00:07:54.153 "io_timeout": 0, 00:07:54.153 "avg_latency_us": 40.269025707996086, 00:07:54.153 "min_latency_us": 21.687336244541484, 00:07:54.153 "max_latency_us": 1337.907423580786 00:07:54.153 } 00:07:54.153 ], 00:07:54.153 "core_count": 1 00:07:54.153 } 00:07:54.153 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.153 01:09:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74659 00:07:54.153 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74659 ']' 00:07:54.153 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74659 00:07:54.153 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:54.153 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.153 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74659 00:07:54.153 killing process with pid 74659 00:07:54.153 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:54.153 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:54.153 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74659' 00:07:54.153 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74659 00:07:54.153 [2024-10-15 01:09:06.864385] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.153 01:09:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74659 00:07:54.412 [2024-10-15 01:09:06.880262] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.412 01:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:54.412 01:09:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HMPVB7BmDr 00:07:54.412 01:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:54.412 01:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:54.412 01:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:54.412 ************************************ 00:07:54.412 END TEST raid_write_error_test 00:07:54.412 ************************************ 00:07:54.412 01:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:54.412 01:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:54.412 01:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:54.412 00:07:54.412 real 0m3.167s 00:07:54.412 user 0m4.044s 00:07:54.412 sys 0m0.471s 00:07:54.412 01:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.412 01:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.672 01:09:07 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:54.672 01:09:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:54.672 01:09:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:54.672 01:09:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:54.672 01:09:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.672 01:09:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.672 ************************************ 00:07:54.672 START TEST raid_state_function_test 00:07:54.672 ************************************ 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:07:54.672 01:09:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74792 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74792' 00:07:54.672 Process raid pid: 74792 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74792 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74792 ']' 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.672 01:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.672 [2024-10-15 01:09:07.253587] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:07:54.672 [2024-10-15 01:09:07.253790] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.932 [2024-10-15 01:09:07.399864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.932 [2024-10-15 01:09:07.427363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.932 [2024-10-15 01:09:07.470279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.932 [2024-10-15 01:09:07.470311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.501 [2024-10-15 01:09:08.080068] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:55.501 [2024-10-15 01:09:08.080209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:55.501 [2024-10-15 01:09:08.080246] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.501 [2024-10-15 01:09:08.080271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.501 [2024-10-15 01:09:08.080291] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:55.501 [2024-10-15 01:09:08.080344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.501 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.502 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.502 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.502 "name": "Existed_Raid", 00:07:55.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.502 "strip_size_kb": 64, 00:07:55.502 "state": "configuring", 00:07:55.502 "raid_level": "raid0", 00:07:55.502 "superblock": false, 00:07:55.502 "num_base_bdevs": 3, 00:07:55.502 "num_base_bdevs_discovered": 0, 00:07:55.502 "num_base_bdevs_operational": 3, 00:07:55.502 "base_bdevs_list": [ 00:07:55.502 { 00:07:55.502 "name": "BaseBdev1", 00:07:55.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.502 "is_configured": false, 00:07:55.502 "data_offset": 0, 00:07:55.502 "data_size": 0 00:07:55.502 }, 00:07:55.502 { 00:07:55.502 "name": "BaseBdev2", 00:07:55.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.502 "is_configured": false, 00:07:55.502 "data_offset": 0, 00:07:55.502 "data_size": 0 00:07:55.502 }, 00:07:55.502 { 00:07:55.502 "name": "BaseBdev3", 00:07:55.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.502 "is_configured": false, 00:07:55.502 "data_offset": 0, 00:07:55.502 "data_size": 0 00:07:55.502 } 00:07:55.502 ] 00:07:55.502 }' 00:07:55.502 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.502 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.070 01:09:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.070 [2024-10-15 01:09:08.543215] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:56.070 [2024-10-15 01:09:08.543322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.070 [2024-10-15 01:09:08.555216] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:56.070 [2024-10-15 01:09:08.555256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:56.070 [2024-10-15 01:09:08.555264] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.070 [2024-10-15 01:09:08.555273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.070 [2024-10-15 01:09:08.555279] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:56.070 [2024-10-15 01:09:08.555288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.070 [2024-10-15 01:09:08.576118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:56.070 BaseBdev1 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.070 [ 00:07:56.070 { 00:07:56.070 "name": "BaseBdev1", 00:07:56.070 "aliases": [ 00:07:56.070 "1a78277b-caf9-4fcf-9cd4-2285da0abdc8" 00:07:56.070 ], 00:07:56.070 
"product_name": "Malloc disk", 00:07:56.070 "block_size": 512, 00:07:56.070 "num_blocks": 65536, 00:07:56.070 "uuid": "1a78277b-caf9-4fcf-9cd4-2285da0abdc8", 00:07:56.070 "assigned_rate_limits": { 00:07:56.070 "rw_ios_per_sec": 0, 00:07:56.070 "rw_mbytes_per_sec": 0, 00:07:56.070 "r_mbytes_per_sec": 0, 00:07:56.070 "w_mbytes_per_sec": 0 00:07:56.070 }, 00:07:56.070 "claimed": true, 00:07:56.070 "claim_type": "exclusive_write", 00:07:56.070 "zoned": false, 00:07:56.070 "supported_io_types": { 00:07:56.070 "read": true, 00:07:56.070 "write": true, 00:07:56.070 "unmap": true, 00:07:56.070 "flush": true, 00:07:56.070 "reset": true, 00:07:56.070 "nvme_admin": false, 00:07:56.070 "nvme_io": false, 00:07:56.070 "nvme_io_md": false, 00:07:56.070 "write_zeroes": true, 00:07:56.070 "zcopy": true, 00:07:56.070 "get_zone_info": false, 00:07:56.070 "zone_management": false, 00:07:56.070 "zone_append": false, 00:07:56.070 "compare": false, 00:07:56.070 "compare_and_write": false, 00:07:56.070 "abort": true, 00:07:56.070 "seek_hole": false, 00:07:56.070 "seek_data": false, 00:07:56.070 "copy": true, 00:07:56.070 "nvme_iov_md": false 00:07:56.070 }, 00:07:56.070 "memory_domains": [ 00:07:56.070 { 00:07:56.070 "dma_device_id": "system", 00:07:56.070 "dma_device_type": 1 00:07:56.070 }, 00:07:56.070 { 00:07:56.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.070 "dma_device_type": 2 00:07:56.070 } 00:07:56.070 ], 00:07:56.070 "driver_specific": {} 00:07:56.070 } 00:07:56.070 ] 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.070 01:09:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.070 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.070 "name": "Existed_Raid", 00:07:56.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.070 "strip_size_kb": 64, 00:07:56.070 "state": "configuring", 00:07:56.070 "raid_level": "raid0", 00:07:56.070 "superblock": false, 00:07:56.070 "num_base_bdevs": 3, 00:07:56.070 "num_base_bdevs_discovered": 1, 00:07:56.070 "num_base_bdevs_operational": 3, 00:07:56.070 "base_bdevs_list": [ 00:07:56.070 { 00:07:56.070 "name": "BaseBdev1", 
00:07:56.070 "uuid": "1a78277b-caf9-4fcf-9cd4-2285da0abdc8", 00:07:56.070 "is_configured": true, 00:07:56.070 "data_offset": 0, 00:07:56.070 "data_size": 65536 00:07:56.070 }, 00:07:56.070 { 00:07:56.070 "name": "BaseBdev2", 00:07:56.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.071 "is_configured": false, 00:07:56.071 "data_offset": 0, 00:07:56.071 "data_size": 0 00:07:56.071 }, 00:07:56.071 { 00:07:56.071 "name": "BaseBdev3", 00:07:56.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.071 "is_configured": false, 00:07:56.071 "data_offset": 0, 00:07:56.071 "data_size": 0 00:07:56.071 } 00:07:56.071 ] 00:07:56.071 }' 00:07:56.071 01:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.071 01:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.329 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:56.329 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.329 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.329 [2024-10-15 01:09:09.039410] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:56.329 [2024-10-15 01:09:09.039462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:56.329 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.329 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:56.329 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.329 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.329 [2024-10-15 
01:09:09.051443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:56.588 [2024-10-15 01:09:09.053480] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.588 [2024-10-15 01:09:09.053527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.588 [2024-10-15 01:09:09.053538] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:56.588 [2024-10-15 01:09:09.053549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.588 "name": "Existed_Raid", 00:07:56.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.588 "strip_size_kb": 64, 00:07:56.588 "state": "configuring", 00:07:56.588 "raid_level": "raid0", 00:07:56.588 "superblock": false, 00:07:56.588 "num_base_bdevs": 3, 00:07:56.588 "num_base_bdevs_discovered": 1, 00:07:56.588 "num_base_bdevs_operational": 3, 00:07:56.588 "base_bdevs_list": [ 00:07:56.588 { 00:07:56.588 "name": "BaseBdev1", 00:07:56.588 "uuid": "1a78277b-caf9-4fcf-9cd4-2285da0abdc8", 00:07:56.588 "is_configured": true, 00:07:56.588 "data_offset": 0, 00:07:56.588 "data_size": 65536 00:07:56.588 }, 00:07:56.588 { 00:07:56.588 "name": "BaseBdev2", 00:07:56.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.588 "is_configured": false, 00:07:56.588 "data_offset": 0, 00:07:56.588 "data_size": 0 00:07:56.588 }, 00:07:56.588 { 00:07:56.588 "name": "BaseBdev3", 00:07:56.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.588 "is_configured": false, 00:07:56.588 "data_offset": 0, 00:07:56.588 "data_size": 0 00:07:56.588 } 00:07:56.588 ] 00:07:56.588 }' 00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
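Editor's aside: the snapshots in this trace show the raid bdev staying in `"configuring"` while `num_base_bdevs_discovered` climbs from 1 to 2 as each `bdev_malloc_create`/claim completes, then flipping to `"online"` once all three base bdevs are claimed. A minimal sketch of that transition rule (illustrative only, not SPDK's actual C state machine in `bdev_raid.c`):

```python
# Transition rule implied by the trace: the raid bdev reports "configuring"
# until every operational base bdev has been discovered and claimed, at
# which point it comes "online".
def raid_state(num_discovered: int, num_operational: int) -> str:
    return "online" if num_discovered == num_operational else "configuring"

# Snapshots from the log above: 1, then 2, then 3 of 3 base bdevs claimed.
states = [raid_state(n, 3) for n in (1, 2, 3)]
print(states)  # → ['configuring', 'configuring', 'online']
```

The later part of the trace (deleting `BaseBdev1` from an online raid0 array) shows the complementary path: raid0 has no redundancy, so `has_redundancy raid0` returns 1 and the expected state becomes `offline` rather than `degraded`.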
00:07:56.588 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.848 [2024-10-15 01:09:09.493719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:56.848 BaseBdev2 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:56.848 01:09:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.848 [ 00:07:56.848 { 00:07:56.848 "name": "BaseBdev2", 00:07:56.848 "aliases": [ 00:07:56.848 "65230ee2-16f0-409a-aa6e-1aefb62060e4" 00:07:56.848 ], 00:07:56.848 "product_name": "Malloc disk", 00:07:56.848 "block_size": 512, 00:07:56.848 "num_blocks": 65536, 00:07:56.848 "uuid": "65230ee2-16f0-409a-aa6e-1aefb62060e4", 00:07:56.848 "assigned_rate_limits": { 00:07:56.848 "rw_ios_per_sec": 0, 00:07:56.848 "rw_mbytes_per_sec": 0, 00:07:56.848 "r_mbytes_per_sec": 0, 00:07:56.848 "w_mbytes_per_sec": 0 00:07:56.848 }, 00:07:56.848 "claimed": true, 00:07:56.848 "claim_type": "exclusive_write", 00:07:56.848 "zoned": false, 00:07:56.848 "supported_io_types": { 00:07:56.848 "read": true, 00:07:56.848 "write": true, 00:07:56.848 "unmap": true, 00:07:56.848 "flush": true, 00:07:56.848 "reset": true, 00:07:56.848 "nvme_admin": false, 00:07:56.848 "nvme_io": false, 00:07:56.848 "nvme_io_md": false, 00:07:56.848 "write_zeroes": true, 00:07:56.848 "zcopy": true, 00:07:56.848 "get_zone_info": false, 00:07:56.848 "zone_management": false, 00:07:56.848 "zone_append": false, 00:07:56.848 "compare": false, 00:07:56.848 "compare_and_write": false, 00:07:56.848 "abort": true, 00:07:56.848 "seek_hole": false, 00:07:56.848 "seek_data": false, 00:07:56.848 "copy": true, 00:07:56.848 "nvme_iov_md": false 00:07:56.848 }, 00:07:56.848 "memory_domains": [ 00:07:56.848 { 00:07:56.848 "dma_device_id": "system", 00:07:56.848 "dma_device_type": 1 00:07:56.848 }, 00:07:56.848 { 00:07:56.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.848 "dma_device_type": 2 00:07:56.848 } 00:07:56.848 ], 00:07:56.848 "driver_specific": {} 00:07:56.848 } 00:07:56.848 ] 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.848 01:09:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.848 01:09:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.107 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.107 "name": "Existed_Raid", 00:07:57.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.107 "strip_size_kb": 64, 00:07:57.107 "state": "configuring", 00:07:57.107 "raid_level": "raid0", 00:07:57.107 "superblock": false, 00:07:57.107 "num_base_bdevs": 3, 00:07:57.107 "num_base_bdevs_discovered": 2, 00:07:57.107 "num_base_bdevs_operational": 3, 00:07:57.107 "base_bdevs_list": [ 00:07:57.107 { 00:07:57.107 "name": "BaseBdev1", 00:07:57.107 "uuid": "1a78277b-caf9-4fcf-9cd4-2285da0abdc8", 00:07:57.107 "is_configured": true, 00:07:57.107 "data_offset": 0, 00:07:57.107 "data_size": 65536 00:07:57.107 }, 00:07:57.107 { 00:07:57.107 "name": "BaseBdev2", 00:07:57.107 "uuid": "65230ee2-16f0-409a-aa6e-1aefb62060e4", 00:07:57.107 "is_configured": true, 00:07:57.107 "data_offset": 0, 00:07:57.107 "data_size": 65536 00:07:57.107 }, 00:07:57.107 { 00:07:57.107 "name": "BaseBdev3", 00:07:57.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.107 "is_configured": false, 00:07:57.107 "data_offset": 0, 00:07:57.107 "data_size": 0 00:07:57.107 } 00:07:57.107 ] 00:07:57.107 }' 00:07:57.107 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.107 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.366 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:57.366 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.366 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.366 [2024-10-15 01:09:09.988743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:57.366 [2024-10-15 01:09:09.988793] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:57.366 [2024-10-15 01:09:09.988806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:57.366 [2024-10-15 01:09:09.989130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:57.366 [2024-10-15 01:09:09.989318] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:57.366 [2024-10-15 01:09:09.989339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:57.366 [2024-10-15 01:09:09.989552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.366 BaseBdev3 00:07:57.366 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.366 01:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:57.366 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:57.366 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:57.366 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:57.366 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:57.366 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:57.367 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:57.367 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.367 01:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.367 
01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.367 [ 00:07:57.367 { 00:07:57.367 "name": "BaseBdev3", 00:07:57.367 "aliases": [ 00:07:57.367 "3fa5d01e-06e3-4d49-b2a2-b4bd8cad7201" 00:07:57.367 ], 00:07:57.367 "product_name": "Malloc disk", 00:07:57.367 "block_size": 512, 00:07:57.367 "num_blocks": 65536, 00:07:57.367 "uuid": "3fa5d01e-06e3-4d49-b2a2-b4bd8cad7201", 00:07:57.367 "assigned_rate_limits": { 00:07:57.367 "rw_ios_per_sec": 0, 00:07:57.367 "rw_mbytes_per_sec": 0, 00:07:57.367 "r_mbytes_per_sec": 0, 00:07:57.367 "w_mbytes_per_sec": 0 00:07:57.367 }, 00:07:57.367 "claimed": true, 00:07:57.367 "claim_type": "exclusive_write", 00:07:57.367 "zoned": false, 00:07:57.367 "supported_io_types": { 00:07:57.367 "read": true, 00:07:57.367 "write": true, 00:07:57.367 "unmap": true, 00:07:57.367 "flush": true, 00:07:57.367 "reset": true, 00:07:57.367 "nvme_admin": false, 00:07:57.367 "nvme_io": false, 00:07:57.367 "nvme_io_md": false, 00:07:57.367 "write_zeroes": true, 00:07:57.367 "zcopy": true, 00:07:57.367 "get_zone_info": false, 00:07:57.367 "zone_management": false, 00:07:57.367 "zone_append": false, 00:07:57.367 "compare": false, 00:07:57.367 "compare_and_write": false, 00:07:57.367 "abort": true, 00:07:57.367 "seek_hole": false, 00:07:57.367 "seek_data": false, 00:07:57.367 "copy": true, 00:07:57.367 "nvme_iov_md": false 00:07:57.367 }, 00:07:57.367 "memory_domains": [ 00:07:57.367 { 00:07:57.367 "dma_device_id": "system", 00:07:57.367 "dma_device_type": 1 00:07:57.367 }, 00:07:57.367 { 00:07:57.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.367 "dma_device_type": 2 00:07:57.367 } 00:07:57.367 ], 00:07:57.367 "driver_specific": {} 00:07:57.367 } 00:07:57.367 ] 
00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.367 "name": "Existed_Raid", 00:07:57.367 "uuid": "4631f571-ea53-41b1-bd67-a26e2db6ac9b", 00:07:57.367 "strip_size_kb": 64, 00:07:57.367 "state": "online", 00:07:57.367 "raid_level": "raid0", 00:07:57.367 "superblock": false, 00:07:57.367 "num_base_bdevs": 3, 00:07:57.367 "num_base_bdevs_discovered": 3, 00:07:57.367 "num_base_bdevs_operational": 3, 00:07:57.367 "base_bdevs_list": [ 00:07:57.367 { 00:07:57.367 "name": "BaseBdev1", 00:07:57.367 "uuid": "1a78277b-caf9-4fcf-9cd4-2285da0abdc8", 00:07:57.367 "is_configured": true, 00:07:57.367 "data_offset": 0, 00:07:57.367 "data_size": 65536 00:07:57.367 }, 00:07:57.367 { 00:07:57.367 "name": "BaseBdev2", 00:07:57.367 "uuid": "65230ee2-16f0-409a-aa6e-1aefb62060e4", 00:07:57.367 "is_configured": true, 00:07:57.367 "data_offset": 0, 00:07:57.367 "data_size": 65536 00:07:57.367 }, 00:07:57.367 { 00:07:57.367 "name": "BaseBdev3", 00:07:57.367 "uuid": "3fa5d01e-06e3-4d49-b2a2-b4bd8cad7201", 00:07:57.367 "is_configured": true, 00:07:57.367 "data_offset": 0, 00:07:57.367 "data_size": 65536 00:07:57.367 } 00:07:57.367 ] 00:07:57.367 }' 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.367 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.947 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:57.947 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:57.947 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:57.947 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:07:57.947 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:57.947 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:57.947 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:57.947 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.947 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.947 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:57.947 [2024-10-15 01:09:10.500234] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.947 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.947 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:57.947 "name": "Existed_Raid", 00:07:57.947 "aliases": [ 00:07:57.947 "4631f571-ea53-41b1-bd67-a26e2db6ac9b" 00:07:57.947 ], 00:07:57.947 "product_name": "Raid Volume", 00:07:57.947 "block_size": 512, 00:07:57.947 "num_blocks": 196608, 00:07:57.947 "uuid": "4631f571-ea53-41b1-bd67-a26e2db6ac9b", 00:07:57.947 "assigned_rate_limits": { 00:07:57.947 "rw_ios_per_sec": 0, 00:07:57.947 "rw_mbytes_per_sec": 0, 00:07:57.947 "r_mbytes_per_sec": 0, 00:07:57.947 "w_mbytes_per_sec": 0 00:07:57.947 }, 00:07:57.947 "claimed": false, 00:07:57.947 "zoned": false, 00:07:57.947 "supported_io_types": { 00:07:57.947 "read": true, 00:07:57.947 "write": true, 00:07:57.947 "unmap": true, 00:07:57.947 "flush": true, 00:07:57.947 "reset": true, 00:07:57.947 "nvme_admin": false, 00:07:57.947 "nvme_io": false, 00:07:57.947 "nvme_io_md": false, 00:07:57.947 "write_zeroes": true, 00:07:57.947 "zcopy": false, 00:07:57.947 "get_zone_info": false, 00:07:57.947 "zone_management": false, 00:07:57.947 
"zone_append": false, 00:07:57.947 "compare": false, 00:07:57.947 "compare_and_write": false, 00:07:57.947 "abort": false, 00:07:57.947 "seek_hole": false, 00:07:57.947 "seek_data": false, 00:07:57.947 "copy": false, 00:07:57.948 "nvme_iov_md": false 00:07:57.948 }, 00:07:57.948 "memory_domains": [ 00:07:57.948 { 00:07:57.948 "dma_device_id": "system", 00:07:57.948 "dma_device_type": 1 00:07:57.948 }, 00:07:57.948 { 00:07:57.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.948 "dma_device_type": 2 00:07:57.948 }, 00:07:57.948 { 00:07:57.948 "dma_device_id": "system", 00:07:57.948 "dma_device_type": 1 00:07:57.948 }, 00:07:57.948 { 00:07:57.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.948 "dma_device_type": 2 00:07:57.948 }, 00:07:57.948 { 00:07:57.948 "dma_device_id": "system", 00:07:57.948 "dma_device_type": 1 00:07:57.948 }, 00:07:57.948 { 00:07:57.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.948 "dma_device_type": 2 00:07:57.948 } 00:07:57.948 ], 00:07:57.948 "driver_specific": { 00:07:57.948 "raid": { 00:07:57.948 "uuid": "4631f571-ea53-41b1-bd67-a26e2db6ac9b", 00:07:57.948 "strip_size_kb": 64, 00:07:57.948 "state": "online", 00:07:57.948 "raid_level": "raid0", 00:07:57.948 "superblock": false, 00:07:57.948 "num_base_bdevs": 3, 00:07:57.948 "num_base_bdevs_discovered": 3, 00:07:57.948 "num_base_bdevs_operational": 3, 00:07:57.948 "base_bdevs_list": [ 00:07:57.948 { 00:07:57.948 "name": "BaseBdev1", 00:07:57.948 "uuid": "1a78277b-caf9-4fcf-9cd4-2285da0abdc8", 00:07:57.948 "is_configured": true, 00:07:57.948 "data_offset": 0, 00:07:57.948 "data_size": 65536 00:07:57.948 }, 00:07:57.948 { 00:07:57.948 "name": "BaseBdev2", 00:07:57.948 "uuid": "65230ee2-16f0-409a-aa6e-1aefb62060e4", 00:07:57.948 "is_configured": true, 00:07:57.948 "data_offset": 0, 00:07:57.948 "data_size": 65536 00:07:57.948 }, 00:07:57.948 { 00:07:57.948 "name": "BaseBdev3", 00:07:57.948 "uuid": "3fa5d01e-06e3-4d49-b2a2-b4bd8cad7201", 00:07:57.948 "is_configured": true, 
00:07:57.948 "data_offset": 0, 00:07:57.948 "data_size": 65536 00:07:57.948 } 00:07:57.948 ] 00:07:57.948 } 00:07:57.948 } 00:07:57.948 }' 00:07:57.948 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:57.948 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:57.948 BaseBdev2 00:07:57.948 BaseBdev3' 00:07:57.948 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.948 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.948 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.948 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:57.948 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.948 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.948 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.219 [2024-10-15 01:09:10.799457] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:58.219 [2024-10-15 01:09:10.799490] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:58.219 [2024-10-15 01:09:10.799545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.219 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.220 "name": "Existed_Raid", 00:07:58.220 "uuid": "4631f571-ea53-41b1-bd67-a26e2db6ac9b", 00:07:58.220 "strip_size_kb": 64, 00:07:58.220 "state": "offline", 00:07:58.220 "raid_level": "raid0", 00:07:58.220 "superblock": false, 00:07:58.220 "num_base_bdevs": 3, 00:07:58.220 "num_base_bdevs_discovered": 2, 00:07:58.220 "num_base_bdevs_operational": 2, 00:07:58.220 "base_bdevs_list": [ 00:07:58.220 { 00:07:58.220 "name": null, 00:07:58.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.220 "is_configured": false, 00:07:58.220 "data_offset": 0, 00:07:58.220 "data_size": 65536 00:07:58.220 }, 00:07:58.220 { 00:07:58.220 "name": "BaseBdev2", 00:07:58.220 "uuid": "65230ee2-16f0-409a-aa6e-1aefb62060e4", 00:07:58.220 "is_configured": true, 00:07:58.220 "data_offset": 0, 00:07:58.220 "data_size": 65536 00:07:58.220 }, 00:07:58.220 { 00:07:58.220 "name": "BaseBdev3", 00:07:58.220 "uuid": "3fa5d01e-06e3-4d49-b2a2-b4bd8cad7201", 00:07:58.220 "is_configured": true, 00:07:58.220 "data_offset": 0, 00:07:58.220 "data_size": 65536 00:07:58.220 } 00:07:58.220 ] 00:07:58.220 }' 00:07:58.220 01:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.220 01:09:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.789 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.790 [2024-10-15 01:09:11.301893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.790 [2024-10-15 01:09:11.356950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:58.790 [2024-10-15 01:09:11.357001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.790 01:09:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.790 BaseBdev2 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.790 01:09:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.790 [ 00:07:58.790 { 00:07:58.790 "name": "BaseBdev2", 00:07:58.790 "aliases": [ 00:07:58.790 "f14551a5-5881-46de-a267-5efdc31e468b" 00:07:58.790 ], 00:07:58.790 "product_name": "Malloc disk", 00:07:58.790 "block_size": 512, 00:07:58.790 "num_blocks": 65536, 00:07:58.790 "uuid": "f14551a5-5881-46de-a267-5efdc31e468b", 00:07:58.790 "assigned_rate_limits": { 00:07:58.790 "rw_ios_per_sec": 0, 00:07:58.790 "rw_mbytes_per_sec": 0, 00:07:58.790 "r_mbytes_per_sec": 0, 00:07:58.790 "w_mbytes_per_sec": 0 00:07:58.790 }, 00:07:58.790 "claimed": false, 00:07:58.790 "zoned": false, 00:07:58.790 "supported_io_types": { 00:07:58.790 "read": true, 00:07:58.790 "write": true, 00:07:58.790 "unmap": true, 00:07:58.790 "flush": true, 00:07:58.790 "reset": true, 00:07:58.790 "nvme_admin": false, 00:07:58.790 "nvme_io": false, 00:07:58.790 "nvme_io_md": false, 00:07:58.790 "write_zeroes": true, 00:07:58.790 "zcopy": true, 00:07:58.790 "get_zone_info": false, 00:07:58.790 "zone_management": false, 00:07:58.790 "zone_append": false, 00:07:58.790 "compare": false, 00:07:58.790 "compare_and_write": false, 00:07:58.790 "abort": true, 00:07:58.790 "seek_hole": false, 00:07:58.790 "seek_data": false, 00:07:58.790 "copy": true, 00:07:58.790 "nvme_iov_md": false 00:07:58.790 }, 00:07:58.790 "memory_domains": [ 00:07:58.790 { 00:07:58.790 "dma_device_id": "system", 00:07:58.790 "dma_device_type": 1 00:07:58.790 }, 00:07:58.790 { 00:07:58.790 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:58.790 "dma_device_type": 2 00:07:58.790 } 00:07:58.790 ], 00:07:58.790 "driver_specific": {} 00:07:58.790 } 00:07:58.790 ] 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.790 BaseBdev3 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.790 01:09:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.790 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.790 [ 00:07:58.790 { 00:07:58.790 "name": "BaseBdev3", 00:07:58.790 "aliases": [ 00:07:58.790 "25cf33fd-8e03-47fa-bed0-d440000ad7ad" 00:07:58.790 ], 00:07:58.791 "product_name": "Malloc disk", 00:07:58.791 "block_size": 512, 00:07:58.791 "num_blocks": 65536, 00:07:58.791 "uuid": "25cf33fd-8e03-47fa-bed0-d440000ad7ad", 00:07:58.791 "assigned_rate_limits": { 00:07:58.791 "rw_ios_per_sec": 0, 00:07:58.791 "rw_mbytes_per_sec": 0, 00:07:58.791 "r_mbytes_per_sec": 0, 00:07:58.791 "w_mbytes_per_sec": 0 00:07:58.791 }, 00:07:58.791 "claimed": false, 00:07:58.791 "zoned": false, 00:07:58.791 "supported_io_types": { 00:07:58.791 "read": true, 00:07:58.791 "write": true, 00:07:58.791 "unmap": true, 00:07:58.791 "flush": true, 00:07:58.791 "reset": true, 00:07:58.791 "nvme_admin": false, 00:07:58.791 "nvme_io": false, 00:07:58.791 "nvme_io_md": false, 00:07:58.791 "write_zeroes": true, 00:07:58.791 "zcopy": true, 00:07:58.791 "get_zone_info": false, 00:07:58.791 "zone_management": false, 00:07:58.791 "zone_append": false, 00:07:58.791 "compare": false, 00:07:59.050 "compare_and_write": false, 00:07:59.050 "abort": true, 00:07:59.050 "seek_hole": false, 00:07:59.050 "seek_data": false, 00:07:59.050 "copy": true, 00:07:59.050 "nvme_iov_md": false 00:07:59.050 }, 00:07:59.050 "memory_domains": [ 00:07:59.050 { 00:07:59.050 "dma_device_id": "system", 00:07:59.050 "dma_device_type": 1 00:07:59.050 }, 00:07:59.050 { 00:07:59.050 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:59.050 "dma_device_type": 2 00:07:59.050 } 00:07:59.050 ], 00:07:59.050 "driver_specific": {} 00:07:59.050 } 00:07:59.050 ] 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.050 [2024-10-15 01:09:11.524105] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:59.050 [2024-10-15 01:09:11.524150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:59.050 [2024-10-15 01:09:11.524187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:59.050 [2024-10-15 01:09:11.526001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.050 
01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.050 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.050 "name": "Existed_Raid", 00:07:59.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.050 "strip_size_kb": 64, 00:07:59.050 "state": "configuring", 00:07:59.050 "raid_level": "raid0", 00:07:59.050 "superblock": false, 00:07:59.050 "num_base_bdevs": 3, 00:07:59.050 "num_base_bdevs_discovered": 2, 00:07:59.050 "num_base_bdevs_operational": 3, 00:07:59.050 "base_bdevs_list": [ 00:07:59.050 { 00:07:59.050 "name": "BaseBdev1", 00:07:59.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.050 "is_configured": false, 00:07:59.050 
"data_offset": 0, 00:07:59.050 "data_size": 0 00:07:59.050 }, 00:07:59.050 { 00:07:59.050 "name": "BaseBdev2", 00:07:59.050 "uuid": "f14551a5-5881-46de-a267-5efdc31e468b", 00:07:59.050 "is_configured": true, 00:07:59.050 "data_offset": 0, 00:07:59.050 "data_size": 65536 00:07:59.050 }, 00:07:59.051 { 00:07:59.051 "name": "BaseBdev3", 00:07:59.051 "uuid": "25cf33fd-8e03-47fa-bed0-d440000ad7ad", 00:07:59.051 "is_configured": true, 00:07:59.051 "data_offset": 0, 00:07:59.051 "data_size": 65536 00:07:59.051 } 00:07:59.051 ] 00:07:59.051 }' 00:07:59.051 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.051 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.309 [2024-10-15 01:09:11.975391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.309 01:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.567 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.568 "name": "Existed_Raid", 00:07:59.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.568 "strip_size_kb": 64, 00:07:59.568 "state": "configuring", 00:07:59.568 "raid_level": "raid0", 00:07:59.568 "superblock": false, 00:07:59.568 "num_base_bdevs": 3, 00:07:59.568 "num_base_bdevs_discovered": 1, 00:07:59.568 "num_base_bdevs_operational": 3, 00:07:59.568 "base_bdevs_list": [ 00:07:59.568 { 00:07:59.568 "name": "BaseBdev1", 00:07:59.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.568 "is_configured": false, 00:07:59.568 "data_offset": 0, 00:07:59.568 "data_size": 0 00:07:59.568 }, 00:07:59.568 { 00:07:59.568 "name": null, 00:07:59.568 "uuid": "f14551a5-5881-46de-a267-5efdc31e468b", 00:07:59.568 "is_configured": false, 00:07:59.568 "data_offset": 0, 00:07:59.568 "data_size": 65536 00:07:59.568 }, 00:07:59.568 { 
00:07:59.568 "name": "BaseBdev3", 00:07:59.568 "uuid": "25cf33fd-8e03-47fa-bed0-d440000ad7ad", 00:07:59.568 "is_configured": true, 00:07:59.568 "data_offset": 0, 00:07:59.568 "data_size": 65536 00:07:59.568 } 00:07:59.568 ] 00:07:59.568 }' 00:07:59.568 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.568 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.827 [2024-10-15 01:09:12.465528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:59.827 BaseBdev1 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:59.827 01:09:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.827 [ 00:07:59.827 { 00:07:59.827 "name": "BaseBdev1", 00:07:59.827 "aliases": [ 00:07:59.827 "75ec87d2-1a5a-4c90-bb61-2e15c577f870" 00:07:59.827 ], 00:07:59.827 "product_name": "Malloc disk", 00:07:59.827 "block_size": 512, 00:07:59.827 "num_blocks": 65536, 00:07:59.827 "uuid": "75ec87d2-1a5a-4c90-bb61-2e15c577f870", 00:07:59.827 "assigned_rate_limits": { 00:07:59.827 "rw_ios_per_sec": 0, 00:07:59.827 "rw_mbytes_per_sec": 0, 00:07:59.827 "r_mbytes_per_sec": 0, 00:07:59.827 "w_mbytes_per_sec": 0 00:07:59.827 }, 00:07:59.827 "claimed": true, 00:07:59.827 "claim_type": "exclusive_write", 00:07:59.827 "zoned": false, 00:07:59.827 "supported_io_types": { 00:07:59.827 "read": true, 00:07:59.827 "write": true, 00:07:59.827 "unmap": true, 00:07:59.827 "flush": true, 
00:07:59.827 "reset": true, 00:07:59.827 "nvme_admin": false, 00:07:59.827 "nvme_io": false, 00:07:59.827 "nvme_io_md": false, 00:07:59.827 "write_zeroes": true, 00:07:59.827 "zcopy": true, 00:07:59.827 "get_zone_info": false, 00:07:59.827 "zone_management": false, 00:07:59.827 "zone_append": false, 00:07:59.827 "compare": false, 00:07:59.827 "compare_and_write": false, 00:07:59.827 "abort": true, 00:07:59.827 "seek_hole": false, 00:07:59.827 "seek_data": false, 00:07:59.827 "copy": true, 00:07:59.827 "nvme_iov_md": false 00:07:59.827 }, 00:07:59.827 "memory_domains": [ 00:07:59.827 { 00:07:59.827 "dma_device_id": "system", 00:07:59.827 "dma_device_type": 1 00:07:59.827 }, 00:07:59.827 { 00:07:59.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.827 "dma_device_type": 2 00:07:59.827 } 00:07:59.827 ], 00:07:59.827 "driver_specific": {} 00:07:59.827 } 00:07:59.827 ] 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.827 "name": "Existed_Raid", 00:07:59.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.827 "strip_size_kb": 64, 00:07:59.827 "state": "configuring", 00:07:59.827 "raid_level": "raid0", 00:07:59.827 "superblock": false, 00:07:59.827 "num_base_bdevs": 3, 00:07:59.827 "num_base_bdevs_discovered": 2, 00:07:59.827 "num_base_bdevs_operational": 3, 00:07:59.827 "base_bdevs_list": [ 00:07:59.827 { 00:07:59.827 "name": "BaseBdev1", 00:07:59.827 "uuid": "75ec87d2-1a5a-4c90-bb61-2e15c577f870", 00:07:59.827 "is_configured": true, 00:07:59.827 "data_offset": 0, 00:07:59.827 "data_size": 65536 00:07:59.827 }, 00:07:59.827 { 00:07:59.827 "name": null, 00:07:59.827 "uuid": "f14551a5-5881-46de-a267-5efdc31e468b", 00:07:59.827 "is_configured": false, 00:07:59.827 "data_offset": 0, 00:07:59.827 "data_size": 65536 00:07:59.827 }, 00:07:59.827 { 00:07:59.827 "name": "BaseBdev3", 00:07:59.827 "uuid": "25cf33fd-8e03-47fa-bed0-d440000ad7ad", 00:07:59.827 "is_configured": true, 00:07:59.827 "data_offset": 0, 00:07:59.827 "data_size": 65536 
00:07:59.827 } 00:07:59.827 ] 00:07:59.827 }' 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.827 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.396 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.396 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.396 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.396 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:00.396 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.396 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:00.396 01:09:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:00.396 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.396 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.396 [2024-10-15 01:09:12.996691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:00.396 01:09:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.396 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:00.396 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.396 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.396 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.396 
01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.396 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.396 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.396 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.396 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.396 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.396 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.396 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.396 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.396 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.396 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.396 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.396 "name": "Existed_Raid", 00:08:00.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.396 "strip_size_kb": 64, 00:08:00.396 "state": "configuring", 00:08:00.396 "raid_level": "raid0", 00:08:00.396 "superblock": false, 00:08:00.396 "num_base_bdevs": 3, 00:08:00.396 "num_base_bdevs_discovered": 1, 00:08:00.396 "num_base_bdevs_operational": 3, 00:08:00.396 "base_bdevs_list": [ 00:08:00.396 { 00:08:00.396 "name": "BaseBdev1", 00:08:00.396 "uuid": "75ec87d2-1a5a-4c90-bb61-2e15c577f870", 00:08:00.396 "is_configured": true, 00:08:00.396 "data_offset": 0, 00:08:00.396 "data_size": 65536 00:08:00.396 }, 00:08:00.396 { 00:08:00.396 "name": null, 
00:08:00.397 "uuid": "f14551a5-5881-46de-a267-5efdc31e468b", 00:08:00.397 "is_configured": false, 00:08:00.397 "data_offset": 0, 00:08:00.397 "data_size": 65536 00:08:00.397 }, 00:08:00.397 { 00:08:00.397 "name": null, 00:08:00.397 "uuid": "25cf33fd-8e03-47fa-bed0-d440000ad7ad", 00:08:00.397 "is_configured": false, 00:08:00.397 "data_offset": 0, 00:08:00.397 "data_size": 65536 00:08:00.397 } 00:08:00.397 ] 00:08:00.397 }' 00:08:00.397 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.397 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.964 [2024-10-15 01:09:13.511839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.964 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.965 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.965 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.965 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.965 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.965 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.965 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.965 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.965 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.965 "name": "Existed_Raid", 00:08:00.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.965 "strip_size_kb": 64, 00:08:00.965 "state": "configuring", 00:08:00.965 "raid_level": "raid0", 00:08:00.965 "superblock": false, 00:08:00.965 
"num_base_bdevs": 3, 00:08:00.965 "num_base_bdevs_discovered": 2, 00:08:00.965 "num_base_bdevs_operational": 3, 00:08:00.965 "base_bdevs_list": [ 00:08:00.965 { 00:08:00.965 "name": "BaseBdev1", 00:08:00.965 "uuid": "75ec87d2-1a5a-4c90-bb61-2e15c577f870", 00:08:00.965 "is_configured": true, 00:08:00.965 "data_offset": 0, 00:08:00.965 "data_size": 65536 00:08:00.965 }, 00:08:00.965 { 00:08:00.965 "name": null, 00:08:00.965 "uuid": "f14551a5-5881-46de-a267-5efdc31e468b", 00:08:00.965 "is_configured": false, 00:08:00.965 "data_offset": 0, 00:08:00.965 "data_size": 65536 00:08:00.965 }, 00:08:00.965 { 00:08:00.965 "name": "BaseBdev3", 00:08:00.965 "uuid": "25cf33fd-8e03-47fa-bed0-d440000ad7ad", 00:08:00.965 "is_configured": true, 00:08:00.965 "data_offset": 0, 00:08:00.965 "data_size": 65536 00:08:00.965 } 00:08:00.965 ] 00:08:00.965 }' 00:08:00.965 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.965 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.532 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.532 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.532 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.532 01:09:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:01.532 01:09:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.532 01:09:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.532 [2024-10-15 01:09:14.031018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.532 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.532 "name": "Existed_Raid", 00:08:01.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.532 "strip_size_kb": 64, 00:08:01.532 "state": "configuring", 00:08:01.532 "raid_level": "raid0", 00:08:01.532 "superblock": false, 00:08:01.532 "num_base_bdevs": 3, 00:08:01.532 "num_base_bdevs_discovered": 1, 00:08:01.532 "num_base_bdevs_operational": 3, 00:08:01.532 "base_bdevs_list": [ 00:08:01.532 { 00:08:01.532 "name": null, 00:08:01.532 "uuid": "75ec87d2-1a5a-4c90-bb61-2e15c577f870", 00:08:01.532 "is_configured": false, 00:08:01.532 "data_offset": 0, 00:08:01.532 "data_size": 65536 00:08:01.532 }, 00:08:01.532 { 00:08:01.532 "name": null, 00:08:01.532 "uuid": "f14551a5-5881-46de-a267-5efdc31e468b", 00:08:01.532 "is_configured": false, 00:08:01.532 "data_offset": 0, 00:08:01.533 "data_size": 65536 00:08:01.533 }, 00:08:01.533 { 00:08:01.533 "name": "BaseBdev3", 00:08:01.533 "uuid": "25cf33fd-8e03-47fa-bed0-d440000ad7ad", 00:08:01.533 "is_configured": true, 00:08:01.533 "data_offset": 0, 00:08:01.533 "data_size": 65536 00:08:01.533 } 00:08:01.533 ] 00:08:01.533 }' 00:08:01.533 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.533 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.792 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:01.792 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.792 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.792 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.051 [2024-10-15 01:09:14.528703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.051 "name": "Existed_Raid", 00:08:02.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.051 "strip_size_kb": 64, 00:08:02.051 "state": "configuring", 00:08:02.051 "raid_level": "raid0", 00:08:02.051 "superblock": false, 00:08:02.051 "num_base_bdevs": 3, 00:08:02.051 "num_base_bdevs_discovered": 2, 00:08:02.051 "num_base_bdevs_operational": 3, 00:08:02.051 "base_bdevs_list": [ 00:08:02.051 { 00:08:02.051 "name": null, 00:08:02.051 "uuid": "75ec87d2-1a5a-4c90-bb61-2e15c577f870", 00:08:02.051 "is_configured": false, 00:08:02.051 "data_offset": 0, 00:08:02.051 "data_size": 65536 00:08:02.051 }, 00:08:02.051 { 00:08:02.051 "name": "BaseBdev2", 00:08:02.051 "uuid": "f14551a5-5881-46de-a267-5efdc31e468b", 00:08:02.051 "is_configured": true, 00:08:02.051 "data_offset": 0, 00:08:02.051 "data_size": 65536 00:08:02.051 }, 00:08:02.051 { 00:08:02.051 "name": "BaseBdev3", 00:08:02.051 "uuid": "25cf33fd-8e03-47fa-bed0-d440000ad7ad", 00:08:02.051 "is_configured": true, 00:08:02.051 "data_offset": 0, 00:08:02.051 "data_size": 65536 00:08:02.051 } 00:08:02.051 ] 00:08:02.051 }' 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.051 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.310 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:02.310 
01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.310 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.310 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.310 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.310 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:02.310 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:02.310 01:09:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.310 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.310 01:09:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.310 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.310 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 75ec87d2-1a5a-4c90-bb61-2e15c577f870 00:08:02.310 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.310 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.569 [2024-10-15 01:09:15.034987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:02.569 [2024-10-15 01:09:15.035029] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:02.569 [2024-10-15 01:09:15.035039] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:02.569 [2024-10-15 01:09:15.035308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 
00:08:02.569 [2024-10-15 01:09:15.035460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:02.569 [2024-10-15 01:09:15.035549] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:02.569 [2024-10-15 01:09:15.035768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.569 NewBaseBdev 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:02.569 [ 00:08:02.569 { 00:08:02.569 "name": "NewBaseBdev", 00:08:02.569 "aliases": [ 00:08:02.569 "75ec87d2-1a5a-4c90-bb61-2e15c577f870" 00:08:02.569 ], 00:08:02.569 "product_name": "Malloc disk", 00:08:02.569 "block_size": 512, 00:08:02.569 "num_blocks": 65536, 00:08:02.569 "uuid": "75ec87d2-1a5a-4c90-bb61-2e15c577f870", 00:08:02.569 "assigned_rate_limits": { 00:08:02.569 "rw_ios_per_sec": 0, 00:08:02.569 "rw_mbytes_per_sec": 0, 00:08:02.569 "r_mbytes_per_sec": 0, 00:08:02.569 "w_mbytes_per_sec": 0 00:08:02.569 }, 00:08:02.569 "claimed": true, 00:08:02.569 "claim_type": "exclusive_write", 00:08:02.569 "zoned": false, 00:08:02.569 "supported_io_types": { 00:08:02.569 "read": true, 00:08:02.569 "write": true, 00:08:02.569 "unmap": true, 00:08:02.569 "flush": true, 00:08:02.569 "reset": true, 00:08:02.569 "nvme_admin": false, 00:08:02.569 "nvme_io": false, 00:08:02.569 "nvme_io_md": false, 00:08:02.569 "write_zeroes": true, 00:08:02.569 "zcopy": true, 00:08:02.569 "get_zone_info": false, 00:08:02.569 "zone_management": false, 00:08:02.569 "zone_append": false, 00:08:02.569 "compare": false, 00:08:02.569 "compare_and_write": false, 00:08:02.569 "abort": true, 00:08:02.569 "seek_hole": false, 00:08:02.569 "seek_data": false, 00:08:02.569 "copy": true, 00:08:02.569 "nvme_iov_md": false 00:08:02.569 }, 00:08:02.569 "memory_domains": [ 00:08:02.569 { 00:08:02.569 "dma_device_id": "system", 00:08:02.569 "dma_device_type": 1 00:08:02.569 }, 00:08:02.569 { 00:08:02.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.569 "dma_device_type": 2 00:08:02.569 } 00:08:02.569 ], 00:08:02.569 "driver_specific": {} 00:08:02.569 } 00:08:02.569 ] 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.569 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.570 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.570 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.570 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.570 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.570 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.570 "name": "Existed_Raid", 00:08:02.570 "uuid": "a491fdfe-de7d-474f-ab7d-de3434a55dda", 00:08:02.570 "strip_size_kb": 64, 00:08:02.570 "state": "online", 00:08:02.570 "raid_level": "raid0", 00:08:02.570 "superblock": false, 00:08:02.570 "num_base_bdevs": 3, 00:08:02.570 
"num_base_bdevs_discovered": 3, 00:08:02.570 "num_base_bdevs_operational": 3, 00:08:02.570 "base_bdevs_list": [ 00:08:02.570 { 00:08:02.570 "name": "NewBaseBdev", 00:08:02.570 "uuid": "75ec87d2-1a5a-4c90-bb61-2e15c577f870", 00:08:02.570 "is_configured": true, 00:08:02.570 "data_offset": 0, 00:08:02.570 "data_size": 65536 00:08:02.570 }, 00:08:02.570 { 00:08:02.570 "name": "BaseBdev2", 00:08:02.570 "uuid": "f14551a5-5881-46de-a267-5efdc31e468b", 00:08:02.570 "is_configured": true, 00:08:02.570 "data_offset": 0, 00:08:02.570 "data_size": 65536 00:08:02.570 }, 00:08:02.570 { 00:08:02.570 "name": "BaseBdev3", 00:08:02.570 "uuid": "25cf33fd-8e03-47fa-bed0-d440000ad7ad", 00:08:02.570 "is_configured": true, 00:08:02.570 "data_offset": 0, 00:08:02.570 "data_size": 65536 00:08:02.570 } 00:08:02.570 ] 00:08:02.570 }' 00:08:02.570 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.570 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.829 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:02.829 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:02.829 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:02.829 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:02.829 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.829 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.829 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:02.829 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.829 01:09:15 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:02.829 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.829 [2024-10-15 01:09:15.542451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.090 "name": "Existed_Raid", 00:08:03.090 "aliases": [ 00:08:03.090 "a491fdfe-de7d-474f-ab7d-de3434a55dda" 00:08:03.090 ], 00:08:03.090 "product_name": "Raid Volume", 00:08:03.090 "block_size": 512, 00:08:03.090 "num_blocks": 196608, 00:08:03.090 "uuid": "a491fdfe-de7d-474f-ab7d-de3434a55dda", 00:08:03.090 "assigned_rate_limits": { 00:08:03.090 "rw_ios_per_sec": 0, 00:08:03.090 "rw_mbytes_per_sec": 0, 00:08:03.090 "r_mbytes_per_sec": 0, 00:08:03.090 "w_mbytes_per_sec": 0 00:08:03.090 }, 00:08:03.090 "claimed": false, 00:08:03.090 "zoned": false, 00:08:03.090 "supported_io_types": { 00:08:03.090 "read": true, 00:08:03.090 "write": true, 00:08:03.090 "unmap": true, 00:08:03.090 "flush": true, 00:08:03.090 "reset": true, 00:08:03.090 "nvme_admin": false, 00:08:03.090 "nvme_io": false, 00:08:03.090 "nvme_io_md": false, 00:08:03.090 "write_zeroes": true, 00:08:03.090 "zcopy": false, 00:08:03.090 "get_zone_info": false, 00:08:03.090 "zone_management": false, 00:08:03.090 "zone_append": false, 00:08:03.090 "compare": false, 00:08:03.090 "compare_and_write": false, 00:08:03.090 "abort": false, 00:08:03.090 "seek_hole": false, 00:08:03.090 "seek_data": false, 00:08:03.090 "copy": false, 00:08:03.090 "nvme_iov_md": false 00:08:03.090 }, 00:08:03.090 "memory_domains": [ 00:08:03.090 { 00:08:03.090 "dma_device_id": "system", 00:08:03.090 "dma_device_type": 1 00:08:03.090 }, 00:08:03.090 { 00:08:03.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.090 "dma_device_type": 2 00:08:03.090 }, 00:08:03.090 
{ 00:08:03.090 "dma_device_id": "system", 00:08:03.090 "dma_device_type": 1 00:08:03.090 }, 00:08:03.090 { 00:08:03.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.090 "dma_device_type": 2 00:08:03.090 }, 00:08:03.090 { 00:08:03.090 "dma_device_id": "system", 00:08:03.090 "dma_device_type": 1 00:08:03.090 }, 00:08:03.090 { 00:08:03.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.090 "dma_device_type": 2 00:08:03.090 } 00:08:03.090 ], 00:08:03.090 "driver_specific": { 00:08:03.090 "raid": { 00:08:03.090 "uuid": "a491fdfe-de7d-474f-ab7d-de3434a55dda", 00:08:03.090 "strip_size_kb": 64, 00:08:03.090 "state": "online", 00:08:03.090 "raid_level": "raid0", 00:08:03.090 "superblock": false, 00:08:03.090 "num_base_bdevs": 3, 00:08:03.090 "num_base_bdevs_discovered": 3, 00:08:03.090 "num_base_bdevs_operational": 3, 00:08:03.090 "base_bdevs_list": [ 00:08:03.090 { 00:08:03.090 "name": "NewBaseBdev", 00:08:03.090 "uuid": "75ec87d2-1a5a-4c90-bb61-2e15c577f870", 00:08:03.090 "is_configured": true, 00:08:03.090 "data_offset": 0, 00:08:03.090 "data_size": 65536 00:08:03.090 }, 00:08:03.090 { 00:08:03.090 "name": "BaseBdev2", 00:08:03.090 "uuid": "f14551a5-5881-46de-a267-5efdc31e468b", 00:08:03.090 "is_configured": true, 00:08:03.090 "data_offset": 0, 00:08:03.090 "data_size": 65536 00:08:03.090 }, 00:08:03.090 { 00:08:03.090 "name": "BaseBdev3", 00:08:03.090 "uuid": "25cf33fd-8e03-47fa-bed0-d440000ad7ad", 00:08:03.090 "is_configured": true, 00:08:03.090 "data_offset": 0, 00:08:03.090 "data_size": 65536 00:08:03.090 } 00:08:03.090 ] 00:08:03.090 } 00:08:03.090 } 00:08:03.090 }' 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:03.090 BaseBdev2 00:08:03.090 BaseBdev3' 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.090 
01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.090 [2024-10-15 01:09:15.797722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.090 [2024-10-15 01:09:15.797750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.090 [2024-10-15 01:09:15.797814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.090 [2024-10-15 01:09:15.797863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.090 [2024-10-15 01:09:15.797874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74792 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74792 ']' 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74792 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:03.090 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.351 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74792 00:08:03.351 killing process with pid 74792 00:08:03.351 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.351 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.351 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74792' 00:08:03.351 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74792 00:08:03.351 [2024-10-15 01:09:15.846147] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.351 01:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74792 00:08:03.351 [2024-10-15 01:09:15.877622] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:03.612 00:08:03.612 real 0m8.928s 00:08:03.612 user 0m15.372s 00:08:03.612 sys 0m1.641s 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.612 
01:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.612 ************************************ 00:08:03.612 END TEST raid_state_function_test 00:08:03.612 ************************************ 00:08:03.612 01:09:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:03.612 01:09:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:03.612 01:09:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.612 01:09:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.612 ************************************ 00:08:03.612 START TEST raid_state_function_test_sb 00:08:03.612 ************************************ 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:03.612 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:03.613 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:03.613 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75396 00:08:03.613 01:09:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:03.613 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75396' 00:08:03.613 Process raid pid: 75396 00:08:03.613 01:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75396 00:08:03.613 01:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75396 ']' 00:08:03.613 01:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.613 01:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.613 01:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.613 01:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.613 01:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.613 [2024-10-15 01:09:16.250771] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:08:03.613 [2024-10-15 01:09:16.250908] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.872 [2024-10-15 01:09:16.396511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.872 [2024-10-15 01:09:16.423353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.872 [2024-10-15 01:09:16.465597] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.872 [2024-10-15 01:09:16.465635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.441 [2024-10-15 01:09:17.083298] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.441 [2024-10-15 01:09:17.083374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.441 [2024-10-15 01:09:17.083384] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.441 [2024-10-15 01:09:17.083394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.441 [2024-10-15 01:09:17.083401] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:04.441 [2024-10-15 01:09:17.083414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.441 "name": "Existed_Raid", 00:08:04.441 "uuid": "bfc144e5-e88a-4b02-bc2a-115bda6649d1", 00:08:04.441 "strip_size_kb": 64, 00:08:04.441 "state": "configuring", 00:08:04.441 "raid_level": "raid0", 00:08:04.441 "superblock": true, 00:08:04.441 "num_base_bdevs": 3, 00:08:04.441 "num_base_bdevs_discovered": 0, 00:08:04.441 "num_base_bdevs_operational": 3, 00:08:04.441 "base_bdevs_list": [ 00:08:04.441 { 00:08:04.441 "name": "BaseBdev1", 00:08:04.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.441 "is_configured": false, 00:08:04.441 "data_offset": 0, 00:08:04.441 "data_size": 0 00:08:04.441 }, 00:08:04.441 { 00:08:04.441 "name": "BaseBdev2", 00:08:04.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.441 "is_configured": false, 00:08:04.441 "data_offset": 0, 00:08:04.441 "data_size": 0 00:08:04.441 }, 00:08:04.441 { 00:08:04.441 "name": "BaseBdev3", 00:08:04.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.441 "is_configured": false, 00:08:04.441 "data_offset": 0, 00:08:04.441 "data_size": 0 00:08:04.441 } 00:08:04.441 ] 00:08:04.441 }' 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.441 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.010 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.010 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.010 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.010 [2024-10-15 01:09:17.542387] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.010 [2024-10-15 01:09:17.542427] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:05.010 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.010 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:05.010 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.010 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.010 [2024-10-15 01:09:17.554404] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.010 [2024-10-15 01:09:17.554446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.010 [2024-10-15 01:09:17.554454] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.010 [2024-10-15 01:09:17.554463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.010 [2024-10-15 01:09:17.554469] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:05.010 [2024-10-15 01:09:17.554478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:05.010 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.010 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.011 [2024-10-15 01:09:17.575301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.011 BaseBdev1 
00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.011 [ 00:08:05.011 { 00:08:05.011 "name": "BaseBdev1", 00:08:05.011 "aliases": [ 00:08:05.011 "80789c65-2fda-4f0e-b864-e31195983b80" 00:08:05.011 ], 00:08:05.011 "product_name": "Malloc disk", 00:08:05.011 "block_size": 512, 00:08:05.011 "num_blocks": 65536, 00:08:05.011 "uuid": "80789c65-2fda-4f0e-b864-e31195983b80", 00:08:05.011 "assigned_rate_limits": { 00:08:05.011 
"rw_ios_per_sec": 0, 00:08:05.011 "rw_mbytes_per_sec": 0, 00:08:05.011 "r_mbytes_per_sec": 0, 00:08:05.011 "w_mbytes_per_sec": 0 00:08:05.011 }, 00:08:05.011 "claimed": true, 00:08:05.011 "claim_type": "exclusive_write", 00:08:05.011 "zoned": false, 00:08:05.011 "supported_io_types": { 00:08:05.011 "read": true, 00:08:05.011 "write": true, 00:08:05.011 "unmap": true, 00:08:05.011 "flush": true, 00:08:05.011 "reset": true, 00:08:05.011 "nvme_admin": false, 00:08:05.011 "nvme_io": false, 00:08:05.011 "nvme_io_md": false, 00:08:05.011 "write_zeroes": true, 00:08:05.011 "zcopy": true, 00:08:05.011 "get_zone_info": false, 00:08:05.011 "zone_management": false, 00:08:05.011 "zone_append": false, 00:08:05.011 "compare": false, 00:08:05.011 "compare_and_write": false, 00:08:05.011 "abort": true, 00:08:05.011 "seek_hole": false, 00:08:05.011 "seek_data": false, 00:08:05.011 "copy": true, 00:08:05.011 "nvme_iov_md": false 00:08:05.011 }, 00:08:05.011 "memory_domains": [ 00:08:05.011 { 00:08:05.011 "dma_device_id": "system", 00:08:05.011 "dma_device_type": 1 00:08:05.011 }, 00:08:05.011 { 00:08:05.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.011 "dma_device_type": 2 00:08:05.011 } 00:08:05.011 ], 00:08:05.011 "driver_specific": {} 00:08:05.011 } 00:08:05.011 ] 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.011 "name": "Existed_Raid", 00:08:05.011 "uuid": "3bfe387f-1dc3-40e8-97f6-fbb935551477", 00:08:05.011 "strip_size_kb": 64, 00:08:05.011 "state": "configuring", 00:08:05.011 "raid_level": "raid0", 00:08:05.011 "superblock": true, 00:08:05.011 "num_base_bdevs": 3, 00:08:05.011 "num_base_bdevs_discovered": 1, 00:08:05.011 "num_base_bdevs_operational": 3, 00:08:05.011 "base_bdevs_list": [ 00:08:05.011 { 00:08:05.011 "name": "BaseBdev1", 00:08:05.011 "uuid": "80789c65-2fda-4f0e-b864-e31195983b80", 00:08:05.011 "is_configured": true, 00:08:05.011 "data_offset": 2048, 00:08:05.011 "data_size": 63488 
00:08:05.011 }, 00:08:05.011 { 00:08:05.011 "name": "BaseBdev2", 00:08:05.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.011 "is_configured": false, 00:08:05.011 "data_offset": 0, 00:08:05.011 "data_size": 0 00:08:05.011 }, 00:08:05.011 { 00:08:05.011 "name": "BaseBdev3", 00:08:05.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.011 "is_configured": false, 00:08:05.011 "data_offset": 0, 00:08:05.011 "data_size": 0 00:08:05.011 } 00:08:05.011 ] 00:08:05.011 }' 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.011 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.579 01:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.579 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.579 01:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.579 [2024-10-15 01:09:18.002664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.579 [2024-10-15 01:09:18.002716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:05.579 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.580 [2024-10-15 01:09:18.014722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.580 [2024-10-15 
01:09:18.016633] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.580 [2024-10-15 01:09:18.016673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.580 [2024-10-15 01:09:18.016683] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:05.580 [2024-10-15 01:09:18.016692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.580 "name": "Existed_Raid", 00:08:05.580 "uuid": "6f493839-c036-4d68-aaf8-4fd7dc4b8864", 00:08:05.580 "strip_size_kb": 64, 00:08:05.580 "state": "configuring", 00:08:05.580 "raid_level": "raid0", 00:08:05.580 "superblock": true, 00:08:05.580 "num_base_bdevs": 3, 00:08:05.580 "num_base_bdevs_discovered": 1, 00:08:05.580 "num_base_bdevs_operational": 3, 00:08:05.580 "base_bdevs_list": [ 00:08:05.580 { 00:08:05.580 "name": "BaseBdev1", 00:08:05.580 "uuid": "80789c65-2fda-4f0e-b864-e31195983b80", 00:08:05.580 "is_configured": true, 00:08:05.580 "data_offset": 2048, 00:08:05.580 "data_size": 63488 00:08:05.580 }, 00:08:05.580 { 00:08:05.580 "name": "BaseBdev2", 00:08:05.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.580 "is_configured": false, 00:08:05.580 "data_offset": 0, 00:08:05.580 "data_size": 0 00:08:05.580 }, 00:08:05.580 { 00:08:05.580 "name": "BaseBdev3", 00:08:05.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.580 "is_configured": false, 00:08:05.580 "data_offset": 0, 00:08:05.580 "data_size": 0 00:08:05.580 } 00:08:05.580 ] 00:08:05.580 }' 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.580 01:09:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.840 [2024-10-15 01:09:18.452899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:05.840 BaseBdev2 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.840 [ 00:08:05.840 { 00:08:05.840 "name": "BaseBdev2", 00:08:05.840 "aliases": [ 00:08:05.840 "3505d137-ff8b-42cf-8bc5-256d47bf487e" 00:08:05.840 ], 00:08:05.840 "product_name": "Malloc disk", 00:08:05.840 "block_size": 512, 00:08:05.840 "num_blocks": 65536, 00:08:05.840 "uuid": "3505d137-ff8b-42cf-8bc5-256d47bf487e", 00:08:05.840 "assigned_rate_limits": { 00:08:05.840 "rw_ios_per_sec": 0, 00:08:05.840 "rw_mbytes_per_sec": 0, 00:08:05.840 "r_mbytes_per_sec": 0, 00:08:05.840 "w_mbytes_per_sec": 0 00:08:05.840 }, 00:08:05.840 "claimed": true, 00:08:05.840 "claim_type": "exclusive_write", 00:08:05.840 "zoned": false, 00:08:05.840 "supported_io_types": { 00:08:05.840 "read": true, 00:08:05.840 "write": true, 00:08:05.840 "unmap": true, 00:08:05.840 "flush": true, 00:08:05.840 "reset": true, 00:08:05.840 "nvme_admin": false, 00:08:05.840 "nvme_io": false, 00:08:05.840 "nvme_io_md": false, 00:08:05.840 "write_zeroes": true, 00:08:05.840 "zcopy": true, 00:08:05.840 "get_zone_info": false, 00:08:05.840 "zone_management": false, 00:08:05.840 "zone_append": false, 00:08:05.840 "compare": false, 00:08:05.840 "compare_and_write": false, 00:08:05.840 "abort": true, 00:08:05.840 "seek_hole": false, 00:08:05.840 "seek_data": false, 00:08:05.840 "copy": true, 00:08:05.840 "nvme_iov_md": false 00:08:05.840 }, 00:08:05.840 "memory_domains": [ 00:08:05.840 { 00:08:05.840 "dma_device_id": "system", 00:08:05.840 "dma_device_type": 1 00:08:05.840 }, 00:08:05.840 { 00:08:05.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.840 "dma_device_type": 2 00:08:05.840 } 00:08:05.840 ], 00:08:05.840 "driver_specific": {} 00:08:05.840 } 00:08:05.840 ] 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.840 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.840 "name": "Existed_Raid", 00:08:05.840 "uuid": "6f493839-c036-4d68-aaf8-4fd7dc4b8864", 00:08:05.840 "strip_size_kb": 64, 00:08:05.840 "state": "configuring", 00:08:05.840 "raid_level": "raid0", 00:08:05.840 "superblock": true, 00:08:05.840 "num_base_bdevs": 3, 00:08:05.840 "num_base_bdevs_discovered": 2, 00:08:05.840 "num_base_bdevs_operational": 3, 00:08:05.841 "base_bdevs_list": [ 00:08:05.841 { 00:08:05.841 "name": "BaseBdev1", 00:08:05.841 "uuid": "80789c65-2fda-4f0e-b864-e31195983b80", 00:08:05.841 "is_configured": true, 00:08:05.841 "data_offset": 2048, 00:08:05.841 "data_size": 63488 00:08:05.841 }, 00:08:05.841 { 00:08:05.841 "name": "BaseBdev2", 00:08:05.841 "uuid": "3505d137-ff8b-42cf-8bc5-256d47bf487e", 00:08:05.841 "is_configured": true, 00:08:05.841 "data_offset": 2048, 00:08:05.841 "data_size": 63488 00:08:05.841 }, 00:08:05.841 { 00:08:05.841 "name": "BaseBdev3", 00:08:05.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.841 "is_configured": false, 00:08:05.841 "data_offset": 0, 00:08:05.841 "data_size": 0 00:08:05.841 } 00:08:05.841 ] 00:08:05.841 }' 00:08:05.841 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.841 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.411 [2024-10-15 01:09:18.938428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:06.411 [2024-10-15 01:09:18.938971] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:06.411 [2024-10-15 01:09:18.939050] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:06.411 BaseBdev3 00:08:06.411 [2024-10-15 01:09:18.940112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.411 [2024-10-15 01:09:18.940659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:06.411 [2024-10-15 01:09:18.940720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:06.411 [2024-10-15 01:09:18.941134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.411 [ 00:08:06.411 { 00:08:06.411 "name": "BaseBdev3", 00:08:06.411 "aliases": [ 00:08:06.411 "de8f0aeb-4044-4f23-b31a-2d55d088b247" 00:08:06.411 ], 00:08:06.411 "product_name": "Malloc disk", 00:08:06.411 "block_size": 512, 00:08:06.411 "num_blocks": 65536, 00:08:06.411 "uuid": "de8f0aeb-4044-4f23-b31a-2d55d088b247", 00:08:06.411 "assigned_rate_limits": { 00:08:06.411 "rw_ios_per_sec": 0, 00:08:06.411 "rw_mbytes_per_sec": 0, 00:08:06.411 "r_mbytes_per_sec": 0, 00:08:06.411 "w_mbytes_per_sec": 0 00:08:06.411 }, 00:08:06.411 "claimed": true, 00:08:06.411 "claim_type": "exclusive_write", 00:08:06.411 "zoned": false, 00:08:06.411 "supported_io_types": { 00:08:06.411 "read": true, 00:08:06.411 "write": true, 00:08:06.411 "unmap": true, 00:08:06.411 "flush": true, 00:08:06.411 "reset": true, 00:08:06.411 "nvme_admin": false, 00:08:06.411 "nvme_io": false, 00:08:06.411 "nvme_io_md": false, 00:08:06.411 "write_zeroes": true, 00:08:06.411 "zcopy": true, 00:08:06.411 "get_zone_info": false, 00:08:06.411 "zone_management": false, 00:08:06.411 "zone_append": false, 00:08:06.411 "compare": false, 00:08:06.411 "compare_and_write": false, 00:08:06.411 "abort": true, 00:08:06.411 "seek_hole": false, 00:08:06.411 "seek_data": false, 00:08:06.411 "copy": true, 00:08:06.411 "nvme_iov_md": false 00:08:06.411 }, 00:08:06.411 "memory_domains": [ 00:08:06.411 { 00:08:06.411 "dma_device_id": "system", 00:08:06.411 "dma_device_type": 1 00:08:06.411 }, 00:08:06.411 { 00:08:06.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.411 "dma_device_type": 2 00:08:06.411 } 00:08:06.411 ], 00:08:06.411 "driver_specific": 
{} 00:08:06.411 } 00:08:06.411 ] 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.411 01:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.411 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.411 "name": "Existed_Raid", 00:08:06.412 "uuid": "6f493839-c036-4d68-aaf8-4fd7dc4b8864", 00:08:06.412 "strip_size_kb": 64, 00:08:06.412 "state": "online", 00:08:06.412 "raid_level": "raid0", 00:08:06.412 "superblock": true, 00:08:06.412 "num_base_bdevs": 3, 00:08:06.412 "num_base_bdevs_discovered": 3, 00:08:06.412 "num_base_bdevs_operational": 3, 00:08:06.412 "base_bdevs_list": [ 00:08:06.412 { 00:08:06.412 "name": "BaseBdev1", 00:08:06.412 "uuid": "80789c65-2fda-4f0e-b864-e31195983b80", 00:08:06.412 "is_configured": true, 00:08:06.412 "data_offset": 2048, 00:08:06.412 "data_size": 63488 00:08:06.412 }, 00:08:06.412 { 00:08:06.412 "name": "BaseBdev2", 00:08:06.412 "uuid": "3505d137-ff8b-42cf-8bc5-256d47bf487e", 00:08:06.412 "is_configured": true, 00:08:06.412 "data_offset": 2048, 00:08:06.412 "data_size": 63488 00:08:06.412 }, 00:08:06.412 { 00:08:06.412 "name": "BaseBdev3", 00:08:06.412 "uuid": "de8f0aeb-4044-4f23-b31a-2d55d088b247", 00:08:06.412 "is_configured": true, 00:08:06.412 "data_offset": 2048, 00:08:06.412 "data_size": 63488 00:08:06.412 } 00:08:06.412 ] 00:08:06.412 }' 00:08:06.412 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.412 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:06.984 [2024-10-15 01:09:19.445783] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:06.984 "name": "Existed_Raid", 00:08:06.984 "aliases": [ 00:08:06.984 "6f493839-c036-4d68-aaf8-4fd7dc4b8864" 00:08:06.984 ], 00:08:06.984 "product_name": "Raid Volume", 00:08:06.984 "block_size": 512, 00:08:06.984 "num_blocks": 190464, 00:08:06.984 "uuid": "6f493839-c036-4d68-aaf8-4fd7dc4b8864", 00:08:06.984 "assigned_rate_limits": { 00:08:06.984 "rw_ios_per_sec": 0, 00:08:06.984 "rw_mbytes_per_sec": 0, 00:08:06.984 "r_mbytes_per_sec": 0, 00:08:06.984 "w_mbytes_per_sec": 0 00:08:06.984 }, 00:08:06.984 "claimed": false, 00:08:06.984 "zoned": false, 00:08:06.984 "supported_io_types": { 00:08:06.984 "read": true, 00:08:06.984 "write": true, 00:08:06.984 "unmap": true, 00:08:06.984 "flush": true, 00:08:06.984 "reset": true, 00:08:06.984 "nvme_admin": false, 00:08:06.984 "nvme_io": false, 00:08:06.984 "nvme_io_md": false, 00:08:06.984 
"write_zeroes": true, 00:08:06.984 "zcopy": false, 00:08:06.984 "get_zone_info": false, 00:08:06.984 "zone_management": false, 00:08:06.984 "zone_append": false, 00:08:06.984 "compare": false, 00:08:06.984 "compare_and_write": false, 00:08:06.984 "abort": false, 00:08:06.984 "seek_hole": false, 00:08:06.984 "seek_data": false, 00:08:06.984 "copy": false, 00:08:06.984 "nvme_iov_md": false 00:08:06.984 }, 00:08:06.984 "memory_domains": [ 00:08:06.984 { 00:08:06.984 "dma_device_id": "system", 00:08:06.984 "dma_device_type": 1 00:08:06.984 }, 00:08:06.984 { 00:08:06.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.984 "dma_device_type": 2 00:08:06.984 }, 00:08:06.984 { 00:08:06.984 "dma_device_id": "system", 00:08:06.984 "dma_device_type": 1 00:08:06.984 }, 00:08:06.984 { 00:08:06.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.984 "dma_device_type": 2 00:08:06.984 }, 00:08:06.984 { 00:08:06.984 "dma_device_id": "system", 00:08:06.984 "dma_device_type": 1 00:08:06.984 }, 00:08:06.984 { 00:08:06.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.984 "dma_device_type": 2 00:08:06.984 } 00:08:06.984 ], 00:08:06.984 "driver_specific": { 00:08:06.984 "raid": { 00:08:06.984 "uuid": "6f493839-c036-4d68-aaf8-4fd7dc4b8864", 00:08:06.984 "strip_size_kb": 64, 00:08:06.984 "state": "online", 00:08:06.984 "raid_level": "raid0", 00:08:06.984 "superblock": true, 00:08:06.984 "num_base_bdevs": 3, 00:08:06.984 "num_base_bdevs_discovered": 3, 00:08:06.984 "num_base_bdevs_operational": 3, 00:08:06.984 "base_bdevs_list": [ 00:08:06.984 { 00:08:06.984 "name": "BaseBdev1", 00:08:06.984 "uuid": "80789c65-2fda-4f0e-b864-e31195983b80", 00:08:06.984 "is_configured": true, 00:08:06.984 "data_offset": 2048, 00:08:06.984 "data_size": 63488 00:08:06.984 }, 00:08:06.984 { 00:08:06.984 "name": "BaseBdev2", 00:08:06.984 "uuid": "3505d137-ff8b-42cf-8bc5-256d47bf487e", 00:08:06.984 "is_configured": true, 00:08:06.984 "data_offset": 2048, 00:08:06.984 "data_size": 63488 00:08:06.984 }, 
00:08:06.984 { 00:08:06.984 "name": "BaseBdev3", 00:08:06.984 "uuid": "de8f0aeb-4044-4f23-b31a-2d55d088b247", 00:08:06.984 "is_configured": true, 00:08:06.984 "data_offset": 2048, 00:08:06.984 "data_size": 63488 00:08:06.984 } 00:08:06.984 ] 00:08:06.984 } 00:08:06.984 } 00:08:06.984 }' 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:06.984 BaseBdev2 00:08:06.984 BaseBdev3' 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.984 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.985 
01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.985 [2024-10-15 01:09:19.649164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.985 [2024-10-15 01:09:19.649203] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.985 [2024-10-15 01:09:19.649269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.985 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.251 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.251 "name": "Existed_Raid", 00:08:07.251 "uuid": "6f493839-c036-4d68-aaf8-4fd7dc4b8864", 00:08:07.251 "strip_size_kb": 64, 00:08:07.251 "state": "offline", 00:08:07.251 "raid_level": "raid0", 00:08:07.251 "superblock": true, 00:08:07.251 "num_base_bdevs": 3, 00:08:07.251 "num_base_bdevs_discovered": 2, 00:08:07.251 "num_base_bdevs_operational": 2, 00:08:07.251 "base_bdevs_list": [ 00:08:07.251 { 00:08:07.251 "name": null, 00:08:07.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.251 "is_configured": false, 00:08:07.251 "data_offset": 0, 00:08:07.251 "data_size": 63488 00:08:07.251 }, 00:08:07.251 { 00:08:07.251 "name": "BaseBdev2", 00:08:07.251 "uuid": "3505d137-ff8b-42cf-8bc5-256d47bf487e", 00:08:07.251 "is_configured": true, 00:08:07.251 "data_offset": 2048, 00:08:07.251 "data_size": 63488 00:08:07.251 }, 00:08:07.251 { 00:08:07.251 "name": "BaseBdev3", 00:08:07.251 "uuid": "de8f0aeb-4044-4f23-b31a-2d55d088b247", 
00:08:07.251 "is_configured": true, 00:08:07.251 "data_offset": 2048, 00:08:07.251 "data_size": 63488 00:08:07.251 } 00:08:07.251 ] 00:08:07.251 }' 00:08:07.251 01:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.251 01:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.510 [2024-10-15 01:09:20.099609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.510 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.511 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:07.511 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:07.511 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:07.511 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.511 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.511 [2024-10-15 01:09:20.170868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:07.511 [2024-10-15 01:09:20.170920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:07.511 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.511 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:07.511 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:07.511 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:07.511 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:07.511 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.511 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.511 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.770 BaseBdev2 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:07.770 01:09:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.770 [ 00:08:07.770 { 00:08:07.770 "name": "BaseBdev2", 00:08:07.770 "aliases": [ 00:08:07.770 "b82646b4-a24f-4697-b2b4-ec726ae6a718" 00:08:07.770 ], 00:08:07.770 "product_name": "Malloc disk", 00:08:07.770 "block_size": 512, 00:08:07.770 "num_blocks": 65536, 00:08:07.770 "uuid": "b82646b4-a24f-4697-b2b4-ec726ae6a718", 00:08:07.770 "assigned_rate_limits": { 00:08:07.770 "rw_ios_per_sec": 0, 00:08:07.770 "rw_mbytes_per_sec": 0, 00:08:07.770 "r_mbytes_per_sec": 0, 00:08:07.770 "w_mbytes_per_sec": 0 00:08:07.770 }, 00:08:07.770 "claimed": false, 00:08:07.770 "zoned": false, 00:08:07.770 "supported_io_types": { 00:08:07.770 "read": true, 00:08:07.770 "write": true, 00:08:07.770 "unmap": true, 00:08:07.770 "flush": true, 00:08:07.770 "reset": true, 00:08:07.770 "nvme_admin": false, 00:08:07.770 "nvme_io": false, 00:08:07.770 "nvme_io_md": false, 00:08:07.770 "write_zeroes": true, 00:08:07.770 "zcopy": true, 00:08:07.770 "get_zone_info": false, 00:08:07.770 
"zone_management": false, 00:08:07.770 "zone_append": false, 00:08:07.770 "compare": false, 00:08:07.770 "compare_and_write": false, 00:08:07.770 "abort": true, 00:08:07.770 "seek_hole": false, 00:08:07.770 "seek_data": false, 00:08:07.770 "copy": true, 00:08:07.770 "nvme_iov_md": false 00:08:07.770 }, 00:08:07.770 "memory_domains": [ 00:08:07.770 { 00:08:07.770 "dma_device_id": "system", 00:08:07.770 "dma_device_type": 1 00:08:07.770 }, 00:08:07.770 { 00:08:07.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.770 "dma_device_type": 2 00:08:07.770 } 00:08:07.770 ], 00:08:07.770 "driver_specific": {} 00:08:07.770 } 00:08:07.770 ] 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.770 BaseBdev3 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.770 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.770 [ 00:08:07.770 { 00:08:07.770 "name": "BaseBdev3", 00:08:07.770 "aliases": [ 00:08:07.770 "9017ea2a-9db1-4fa5-bc5c-792ac28836a6" 00:08:07.770 ], 00:08:07.770 "product_name": "Malloc disk", 00:08:07.770 "block_size": 512, 00:08:07.770 "num_blocks": 65536, 00:08:07.770 "uuid": "9017ea2a-9db1-4fa5-bc5c-792ac28836a6", 00:08:07.770 "assigned_rate_limits": { 00:08:07.770 "rw_ios_per_sec": 0, 00:08:07.770 "rw_mbytes_per_sec": 0, 00:08:07.770 "r_mbytes_per_sec": 0, 00:08:07.771 "w_mbytes_per_sec": 0 00:08:07.771 }, 00:08:07.771 "claimed": false, 00:08:07.771 "zoned": false, 00:08:07.771 "supported_io_types": { 00:08:07.771 "read": true, 00:08:07.771 "write": true, 00:08:07.771 "unmap": true, 00:08:07.771 "flush": true, 00:08:07.771 "reset": true, 00:08:07.771 "nvme_admin": false, 00:08:07.771 "nvme_io": false, 00:08:07.771 "nvme_io_md": false, 00:08:07.771 "write_zeroes": true, 00:08:07.771 
"zcopy": true, 00:08:07.771 "get_zone_info": false, 00:08:07.771 "zone_management": false, 00:08:07.771 "zone_append": false, 00:08:07.771 "compare": false, 00:08:07.771 "compare_and_write": false, 00:08:07.771 "abort": true, 00:08:07.771 "seek_hole": false, 00:08:07.771 "seek_data": false, 00:08:07.771 "copy": true, 00:08:07.771 "nvme_iov_md": false 00:08:07.771 }, 00:08:07.771 "memory_domains": [ 00:08:07.771 { 00:08:07.771 "dma_device_id": "system", 00:08:07.771 "dma_device_type": 1 00:08:07.771 }, 00:08:07.771 { 00:08:07.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.771 "dma_device_type": 2 00:08:07.771 } 00:08:07.771 ], 00:08:07.771 "driver_specific": {} 00:08:07.771 } 00:08:07.771 ] 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.771 [2024-10-15 01:09:20.343636] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.771 [2024-10-15 01:09:20.343680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.771 [2024-10-15 01:09:20.343702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.771 [2024-10-15 01:09:20.345504] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.771 01:09:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.771 "name": "Existed_Raid", 00:08:07.771 "uuid": "d7e322aa-e125-4580-8bf1-4dd655366198", 00:08:07.771 "strip_size_kb": 64, 00:08:07.771 "state": "configuring", 00:08:07.771 "raid_level": "raid0", 00:08:07.771 "superblock": true, 00:08:07.771 "num_base_bdevs": 3, 00:08:07.771 "num_base_bdevs_discovered": 2, 00:08:07.771 "num_base_bdevs_operational": 3, 00:08:07.771 "base_bdevs_list": [ 00:08:07.771 { 00:08:07.771 "name": "BaseBdev1", 00:08:07.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.771 "is_configured": false, 00:08:07.771 "data_offset": 0, 00:08:07.771 "data_size": 0 00:08:07.771 }, 00:08:07.771 { 00:08:07.771 "name": "BaseBdev2", 00:08:07.771 "uuid": "b82646b4-a24f-4697-b2b4-ec726ae6a718", 00:08:07.771 "is_configured": true, 00:08:07.771 "data_offset": 2048, 00:08:07.771 "data_size": 63488 00:08:07.771 }, 00:08:07.771 { 00:08:07.771 "name": "BaseBdev3", 00:08:07.771 "uuid": "9017ea2a-9db1-4fa5-bc5c-792ac28836a6", 00:08:07.771 "is_configured": true, 00:08:07.771 "data_offset": 2048, 00:08:07.771 "data_size": 63488 00:08:07.771 } 00:08:07.771 ] 00:08:07.771 }' 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.771 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.340 [2024-10-15 01:09:20.778887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.340 01:09:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.340 "name": "Existed_Raid", 00:08:08.340 "uuid": "d7e322aa-e125-4580-8bf1-4dd655366198", 00:08:08.340 "strip_size_kb": 64, 
00:08:08.340 "state": "configuring", 00:08:08.340 "raid_level": "raid0", 00:08:08.340 "superblock": true, 00:08:08.340 "num_base_bdevs": 3, 00:08:08.340 "num_base_bdevs_discovered": 1, 00:08:08.340 "num_base_bdevs_operational": 3, 00:08:08.340 "base_bdevs_list": [ 00:08:08.340 { 00:08:08.340 "name": "BaseBdev1", 00:08:08.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.340 "is_configured": false, 00:08:08.340 "data_offset": 0, 00:08:08.340 "data_size": 0 00:08:08.340 }, 00:08:08.340 { 00:08:08.340 "name": null, 00:08:08.340 "uuid": "b82646b4-a24f-4697-b2b4-ec726ae6a718", 00:08:08.340 "is_configured": false, 00:08:08.340 "data_offset": 0, 00:08:08.340 "data_size": 63488 00:08:08.340 }, 00:08:08.340 { 00:08:08.340 "name": "BaseBdev3", 00:08:08.340 "uuid": "9017ea2a-9db1-4fa5-bc5c-792ac28836a6", 00:08:08.340 "is_configured": true, 00:08:08.340 "data_offset": 2048, 00:08:08.340 "data_size": 63488 00:08:08.340 } 00:08:08.340 ] 00:08:08.340 }' 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.340 01:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.600 [2024-10-15 01:09:21.277170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.600 BaseBdev1 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.600 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.601 
[ 00:08:08.601 { 00:08:08.601 "name": "BaseBdev1", 00:08:08.601 "aliases": [ 00:08:08.601 "354b40cb-c158-4e53-bd76-7411f3bff5e3" 00:08:08.601 ], 00:08:08.601 "product_name": "Malloc disk", 00:08:08.601 "block_size": 512, 00:08:08.601 "num_blocks": 65536, 00:08:08.601 "uuid": "354b40cb-c158-4e53-bd76-7411f3bff5e3", 00:08:08.601 "assigned_rate_limits": { 00:08:08.601 "rw_ios_per_sec": 0, 00:08:08.601 "rw_mbytes_per_sec": 0, 00:08:08.601 "r_mbytes_per_sec": 0, 00:08:08.601 "w_mbytes_per_sec": 0 00:08:08.601 }, 00:08:08.601 "claimed": true, 00:08:08.601 "claim_type": "exclusive_write", 00:08:08.601 "zoned": false, 00:08:08.601 "supported_io_types": { 00:08:08.601 "read": true, 00:08:08.601 "write": true, 00:08:08.601 "unmap": true, 00:08:08.601 "flush": true, 00:08:08.601 "reset": true, 00:08:08.601 "nvme_admin": false, 00:08:08.601 "nvme_io": false, 00:08:08.601 "nvme_io_md": false, 00:08:08.601 "write_zeroes": true, 00:08:08.601 "zcopy": true, 00:08:08.601 "get_zone_info": false, 00:08:08.601 "zone_management": false, 00:08:08.601 "zone_append": false, 00:08:08.601 "compare": false, 00:08:08.601 "compare_and_write": false, 00:08:08.601 "abort": true, 00:08:08.601 "seek_hole": false, 00:08:08.601 "seek_data": false, 00:08:08.601 "copy": true, 00:08:08.601 "nvme_iov_md": false 00:08:08.601 }, 00:08:08.601 "memory_domains": [ 00:08:08.601 { 00:08:08.601 "dma_device_id": "system", 00:08:08.601 "dma_device_type": 1 00:08:08.601 }, 00:08:08.601 { 00:08:08.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.601 "dma_device_type": 2 00:08:08.601 } 00:08:08.601 ], 00:08:08.601 "driver_specific": {} 00:08:08.601 } 00:08:08.601 ] 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.601 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.861 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.861 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.861 "name": "Existed_Raid", 00:08:08.861 "uuid": "d7e322aa-e125-4580-8bf1-4dd655366198", 00:08:08.861 "strip_size_kb": 64, 00:08:08.861 "state": "configuring", 00:08:08.861 "raid_level": "raid0", 00:08:08.861 "superblock": true, 
00:08:08.861 "num_base_bdevs": 3, 00:08:08.861 "num_base_bdevs_discovered": 2, 00:08:08.861 "num_base_bdevs_operational": 3, 00:08:08.861 "base_bdevs_list": [ 00:08:08.861 { 00:08:08.861 "name": "BaseBdev1", 00:08:08.861 "uuid": "354b40cb-c158-4e53-bd76-7411f3bff5e3", 00:08:08.861 "is_configured": true, 00:08:08.861 "data_offset": 2048, 00:08:08.861 "data_size": 63488 00:08:08.861 }, 00:08:08.861 { 00:08:08.861 "name": null, 00:08:08.861 "uuid": "b82646b4-a24f-4697-b2b4-ec726ae6a718", 00:08:08.861 "is_configured": false, 00:08:08.861 "data_offset": 0, 00:08:08.861 "data_size": 63488 00:08:08.861 }, 00:08:08.861 { 00:08:08.861 "name": "BaseBdev3", 00:08:08.861 "uuid": "9017ea2a-9db1-4fa5-bc5c-792ac28836a6", 00:08:08.861 "is_configured": true, 00:08:08.861 "data_offset": 2048, 00:08:08.861 "data_size": 63488 00:08:08.861 } 00:08:08.861 ] 00:08:08.861 }' 00:08:08.861 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.861 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.121 [2024-10-15 01:09:21.780340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.121 "name": "Existed_Raid", 00:08:09.121 "uuid": "d7e322aa-e125-4580-8bf1-4dd655366198", 00:08:09.121 "strip_size_kb": 64, 00:08:09.121 "state": "configuring", 00:08:09.121 "raid_level": "raid0", 00:08:09.121 "superblock": true, 00:08:09.121 "num_base_bdevs": 3, 00:08:09.121 "num_base_bdevs_discovered": 1, 00:08:09.121 "num_base_bdevs_operational": 3, 00:08:09.121 "base_bdevs_list": [ 00:08:09.121 { 00:08:09.121 "name": "BaseBdev1", 00:08:09.121 "uuid": "354b40cb-c158-4e53-bd76-7411f3bff5e3", 00:08:09.121 "is_configured": true, 00:08:09.121 "data_offset": 2048, 00:08:09.121 "data_size": 63488 00:08:09.121 }, 00:08:09.121 { 00:08:09.121 "name": null, 00:08:09.121 "uuid": "b82646b4-a24f-4697-b2b4-ec726ae6a718", 00:08:09.121 "is_configured": false, 00:08:09.121 "data_offset": 0, 00:08:09.121 "data_size": 63488 00:08:09.121 }, 00:08:09.121 { 00:08:09.121 "name": null, 00:08:09.121 "uuid": "9017ea2a-9db1-4fa5-bc5c-792ac28836a6", 00:08:09.121 "is_configured": false, 00:08:09.121 "data_offset": 0, 00:08:09.121 "data_size": 63488 00:08:09.121 } 00:08:09.121 ] 00:08:09.121 }' 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.121 01:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.692 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
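The `verify_raid_bdev_state` checks traced above repeatedly pull `bdev_raid_get_bdevs all` and filter the result with jq (`.[] | select(.name == "Existed_Raid")`, `.[0].base_bdevs_list[N].is_configured`). As a rough illustration only, the same selection and state check can be sketched in Python against a sample shaped like the JSON dumps in this log — the field names and values are copied from the log, while the helper function names are invented for this sketch:

```python
import json

# Sample shaped like the `bdev_raid_get_bdevs all` output captured in
# this log, trimmed to the fields the test actually compares.
RAW = '''
[
  {
    "name": "Existed_Raid",
    "uuid": "d7e322aa-e125-4580-8bf1-4dd655366198",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid0",
    "superblock": true,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": true},
      {"name": null, "is_configured": false},
      {"name": null, "is_configured": false}
    ]
  }
]
'''

def select_raid(bdevs, name):
    # jq equivalent: .[] | select(.name == "Existed_Raid")
    return next(b for b in bdevs if b["name"] == name)

def verify_state(info, state, level, strip_size, operational):
    # Mirrors the field-by-field comparison verify_raid_bdev_state makes.
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

raid = select_raid(json.loads(RAW), "Existed_Raid")
verify_state(raid, "configuring", "raid0", 64, 3)

# jq equivalent: .[0].base_bdevs_list[2].is_configured
print(raid["base_bdevs_list"][2]["is_configured"])  # → False
```

This only mirrors the shape of the checks; the real test does the filtering with `rpc.py` and jq inside the shell functions shown in the trace.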
00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.693 [2024-10-15 01:09:22.247583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.693 "name": "Existed_Raid", 00:08:09.693 "uuid": "d7e322aa-e125-4580-8bf1-4dd655366198", 00:08:09.693 "strip_size_kb": 64, 00:08:09.693 "state": "configuring", 00:08:09.693 "raid_level": "raid0", 00:08:09.693 "superblock": true, 00:08:09.693 "num_base_bdevs": 3, 00:08:09.693 "num_base_bdevs_discovered": 2, 00:08:09.693 "num_base_bdevs_operational": 3, 00:08:09.693 "base_bdevs_list": [ 00:08:09.693 { 00:08:09.693 "name": "BaseBdev1", 00:08:09.693 "uuid": "354b40cb-c158-4e53-bd76-7411f3bff5e3", 00:08:09.693 "is_configured": true, 00:08:09.693 "data_offset": 2048, 00:08:09.693 "data_size": 63488 00:08:09.693 }, 00:08:09.693 { 00:08:09.693 "name": null, 00:08:09.693 "uuid": "b82646b4-a24f-4697-b2b4-ec726ae6a718", 00:08:09.693 "is_configured": false, 00:08:09.693 "data_offset": 0, 00:08:09.693 "data_size": 63488 00:08:09.693 }, 00:08:09.693 { 00:08:09.693 "name": "BaseBdev3", 00:08:09.693 "uuid": "9017ea2a-9db1-4fa5-bc5c-792ac28836a6", 00:08:09.693 "is_configured": true, 00:08:09.693 "data_offset": 2048, 00:08:09.693 "data_size": 63488 00:08:09.693 } 00:08:09.693 ] 00:08:09.693 }' 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.693 01:09:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:09.970 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.970 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.970 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.970 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:09.970 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.241 [2024-10-15 01:09:22.726779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.241 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.241 "name": "Existed_Raid", 00:08:10.241 "uuid": "d7e322aa-e125-4580-8bf1-4dd655366198", 00:08:10.241 "strip_size_kb": 64, 00:08:10.241 "state": "configuring", 00:08:10.241 "raid_level": "raid0", 00:08:10.241 "superblock": true, 00:08:10.241 "num_base_bdevs": 3, 00:08:10.241 "num_base_bdevs_discovered": 1, 00:08:10.241 "num_base_bdevs_operational": 3, 00:08:10.241 "base_bdevs_list": [ 00:08:10.241 { 00:08:10.241 "name": null, 00:08:10.241 "uuid": "354b40cb-c158-4e53-bd76-7411f3bff5e3", 00:08:10.241 "is_configured": false, 00:08:10.241 "data_offset": 0, 00:08:10.241 "data_size": 63488 00:08:10.241 }, 00:08:10.241 { 00:08:10.241 "name": null, 00:08:10.241 "uuid": "b82646b4-a24f-4697-b2b4-ec726ae6a718", 00:08:10.242 "is_configured": false, 00:08:10.242 "data_offset": 0, 00:08:10.242 
"data_size": 63488 00:08:10.242 }, 00:08:10.242 { 00:08:10.242 "name": "BaseBdev3", 00:08:10.242 "uuid": "9017ea2a-9db1-4fa5-bc5c-792ac28836a6", 00:08:10.242 "is_configured": true, 00:08:10.242 "data_offset": 2048, 00:08:10.242 "data_size": 63488 00:08:10.242 } 00:08:10.242 ] 00:08:10.242 }' 00:08:10.242 01:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.242 01:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.501 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:10.501 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.501 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.501 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.501 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.501 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:10.501 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:10.501 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.501 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.502 [2024-10-15 01:09:23.212462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.502 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.502 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.502 01:09:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.502 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.502 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.502 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.502 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.502 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.502 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.502 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.502 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.502 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.502 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.502 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.502 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.762 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.762 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.762 "name": "Existed_Raid", 00:08:10.762 "uuid": "d7e322aa-e125-4580-8bf1-4dd655366198", 00:08:10.762 "strip_size_kb": 64, 00:08:10.762 "state": "configuring", 00:08:10.762 "raid_level": "raid0", 00:08:10.762 "superblock": true, 00:08:10.762 "num_base_bdevs": 3, 00:08:10.762 
"num_base_bdevs_discovered": 2, 00:08:10.762 "num_base_bdevs_operational": 3, 00:08:10.762 "base_bdevs_list": [ 00:08:10.762 { 00:08:10.762 "name": null, 00:08:10.762 "uuid": "354b40cb-c158-4e53-bd76-7411f3bff5e3", 00:08:10.762 "is_configured": false, 00:08:10.762 "data_offset": 0, 00:08:10.762 "data_size": 63488 00:08:10.762 }, 00:08:10.762 { 00:08:10.762 "name": "BaseBdev2", 00:08:10.762 "uuid": "b82646b4-a24f-4697-b2b4-ec726ae6a718", 00:08:10.762 "is_configured": true, 00:08:10.762 "data_offset": 2048, 00:08:10.762 "data_size": 63488 00:08:10.762 }, 00:08:10.762 { 00:08:10.762 "name": "BaseBdev3", 00:08:10.762 "uuid": "9017ea2a-9db1-4fa5-bc5c-792ac28836a6", 00:08:10.762 "is_configured": true, 00:08:10.762 "data_offset": 2048, 00:08:10.762 "data_size": 63488 00:08:10.762 } 00:08:10.762 ] 00:08:10.762 }' 00:08:10.762 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.762 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.022 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:11.022 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.022 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.022 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.022 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.022 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:11.022 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.022 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:11.022 01:09:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.022 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.022 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.022 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 354b40cb-c158-4e53-bd76-7411f3bff5e3 00:08:11.022 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.022 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.022 [2024-10-15 01:09:23.698695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:11.022 [2024-10-15 01:09:23.698861] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:11.022 [2024-10-15 01:09:23.698877] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:11.023 [2024-10-15 01:09:23.699106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:11.023 NewBaseBdev 00:08:11.023 [2024-10-15 01:09:23.699261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:11.023 [2024-10-15 01:09:23.699279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:11.023 [2024-10-15 01:09:23.699406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:11.023 
01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.023 [ 00:08:11.023 { 00:08:11.023 "name": "NewBaseBdev", 00:08:11.023 "aliases": [ 00:08:11.023 "354b40cb-c158-4e53-bd76-7411f3bff5e3" 00:08:11.023 ], 00:08:11.023 "product_name": "Malloc disk", 00:08:11.023 "block_size": 512, 00:08:11.023 "num_blocks": 65536, 00:08:11.023 "uuid": "354b40cb-c158-4e53-bd76-7411f3bff5e3", 00:08:11.023 "assigned_rate_limits": { 00:08:11.023 "rw_ios_per_sec": 0, 00:08:11.023 "rw_mbytes_per_sec": 0, 00:08:11.023 "r_mbytes_per_sec": 0, 00:08:11.023 "w_mbytes_per_sec": 0 00:08:11.023 }, 00:08:11.023 "claimed": true, 00:08:11.023 "claim_type": "exclusive_write", 00:08:11.023 "zoned": false, 00:08:11.023 "supported_io_types": { 00:08:11.023 "read": true, 00:08:11.023 "write": true, 00:08:11.023 
"unmap": true, 00:08:11.023 "flush": true, 00:08:11.023 "reset": true, 00:08:11.023 "nvme_admin": false, 00:08:11.023 "nvme_io": false, 00:08:11.023 "nvme_io_md": false, 00:08:11.023 "write_zeroes": true, 00:08:11.023 "zcopy": true, 00:08:11.023 "get_zone_info": false, 00:08:11.023 "zone_management": false, 00:08:11.023 "zone_append": false, 00:08:11.023 "compare": false, 00:08:11.023 "compare_and_write": false, 00:08:11.023 "abort": true, 00:08:11.023 "seek_hole": false, 00:08:11.023 "seek_data": false, 00:08:11.023 "copy": true, 00:08:11.023 "nvme_iov_md": false 00:08:11.023 }, 00:08:11.023 "memory_domains": [ 00:08:11.023 { 00:08:11.023 "dma_device_id": "system", 00:08:11.023 "dma_device_type": 1 00:08:11.023 }, 00:08:11.023 { 00:08:11.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.023 "dma_device_type": 2 00:08:11.023 } 00:08:11.023 ], 00:08:11.023 "driver_specific": {} 00:08:11.023 } 00:08:11.023 ] 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.023 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.283 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.283 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.283 "name": "Existed_Raid", 00:08:11.283 "uuid": "d7e322aa-e125-4580-8bf1-4dd655366198", 00:08:11.283 "strip_size_kb": 64, 00:08:11.283 "state": "online", 00:08:11.283 "raid_level": "raid0", 00:08:11.283 "superblock": true, 00:08:11.283 "num_base_bdevs": 3, 00:08:11.283 "num_base_bdevs_discovered": 3, 00:08:11.283 "num_base_bdevs_operational": 3, 00:08:11.283 "base_bdevs_list": [ 00:08:11.283 { 00:08:11.283 "name": "NewBaseBdev", 00:08:11.283 "uuid": "354b40cb-c158-4e53-bd76-7411f3bff5e3", 00:08:11.283 "is_configured": true, 00:08:11.283 "data_offset": 2048, 00:08:11.283 "data_size": 63488 00:08:11.283 }, 00:08:11.283 { 00:08:11.283 "name": "BaseBdev2", 00:08:11.283 "uuid": "b82646b4-a24f-4697-b2b4-ec726ae6a718", 00:08:11.283 "is_configured": true, 00:08:11.283 "data_offset": 2048, 00:08:11.283 "data_size": 63488 00:08:11.283 }, 00:08:11.283 { 00:08:11.283 "name": "BaseBdev3", 00:08:11.283 "uuid": "9017ea2a-9db1-4fa5-bc5c-792ac28836a6", 00:08:11.283 
"is_configured": true, 00:08:11.283 "data_offset": 2048, 00:08:11.283 "data_size": 63488 00:08:11.283 } 00:08:11.283 ] 00:08:11.283 }' 00:08:11.283 01:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.283 01:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.543 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:11.543 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:11.543 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.543 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.543 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.543 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.543 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.543 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:11.544 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.544 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.544 [2024-10-15 01:09:24.146255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.544 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.544 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.544 "name": "Existed_Raid", 00:08:11.544 "aliases": [ 00:08:11.544 "d7e322aa-e125-4580-8bf1-4dd655366198" 00:08:11.544 ], 00:08:11.544 "product_name": "Raid 
Volume", 00:08:11.544 "block_size": 512, 00:08:11.544 "num_blocks": 190464, 00:08:11.544 "uuid": "d7e322aa-e125-4580-8bf1-4dd655366198", 00:08:11.544 "assigned_rate_limits": { 00:08:11.544 "rw_ios_per_sec": 0, 00:08:11.544 "rw_mbytes_per_sec": 0, 00:08:11.544 "r_mbytes_per_sec": 0, 00:08:11.544 "w_mbytes_per_sec": 0 00:08:11.544 }, 00:08:11.544 "claimed": false, 00:08:11.544 "zoned": false, 00:08:11.544 "supported_io_types": { 00:08:11.544 "read": true, 00:08:11.544 "write": true, 00:08:11.544 "unmap": true, 00:08:11.544 "flush": true, 00:08:11.544 "reset": true, 00:08:11.544 "nvme_admin": false, 00:08:11.544 "nvme_io": false, 00:08:11.544 "nvme_io_md": false, 00:08:11.544 "write_zeroes": true, 00:08:11.544 "zcopy": false, 00:08:11.544 "get_zone_info": false, 00:08:11.544 "zone_management": false, 00:08:11.544 "zone_append": false, 00:08:11.544 "compare": false, 00:08:11.544 "compare_and_write": false, 00:08:11.544 "abort": false, 00:08:11.544 "seek_hole": false, 00:08:11.544 "seek_data": false, 00:08:11.544 "copy": false, 00:08:11.544 "nvme_iov_md": false 00:08:11.544 }, 00:08:11.544 "memory_domains": [ 00:08:11.544 { 00:08:11.544 "dma_device_id": "system", 00:08:11.544 "dma_device_type": 1 00:08:11.544 }, 00:08:11.544 { 00:08:11.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.544 "dma_device_type": 2 00:08:11.544 }, 00:08:11.544 { 00:08:11.544 "dma_device_id": "system", 00:08:11.544 "dma_device_type": 1 00:08:11.544 }, 00:08:11.544 { 00:08:11.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.544 "dma_device_type": 2 00:08:11.544 }, 00:08:11.544 { 00:08:11.544 "dma_device_id": "system", 00:08:11.544 "dma_device_type": 1 00:08:11.544 }, 00:08:11.544 { 00:08:11.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.544 "dma_device_type": 2 00:08:11.544 } 00:08:11.544 ], 00:08:11.544 "driver_specific": { 00:08:11.544 "raid": { 00:08:11.544 "uuid": "d7e322aa-e125-4580-8bf1-4dd655366198", 00:08:11.544 "strip_size_kb": 64, 00:08:11.544 "state": "online", 
00:08:11.544 "raid_level": "raid0", 00:08:11.544 "superblock": true, 00:08:11.544 "num_base_bdevs": 3, 00:08:11.544 "num_base_bdevs_discovered": 3, 00:08:11.544 "num_base_bdevs_operational": 3, 00:08:11.544 "base_bdevs_list": [ 00:08:11.544 { 00:08:11.544 "name": "NewBaseBdev", 00:08:11.544 "uuid": "354b40cb-c158-4e53-bd76-7411f3bff5e3", 00:08:11.544 "is_configured": true, 00:08:11.544 "data_offset": 2048, 00:08:11.544 "data_size": 63488 00:08:11.544 }, 00:08:11.544 { 00:08:11.544 "name": "BaseBdev2", 00:08:11.544 "uuid": "b82646b4-a24f-4697-b2b4-ec726ae6a718", 00:08:11.544 "is_configured": true, 00:08:11.544 "data_offset": 2048, 00:08:11.544 "data_size": 63488 00:08:11.544 }, 00:08:11.544 { 00:08:11.544 "name": "BaseBdev3", 00:08:11.544 "uuid": "9017ea2a-9db1-4fa5-bc5c-792ac28836a6", 00:08:11.544 "is_configured": true, 00:08:11.544 "data_offset": 2048, 00:08:11.544 "data_size": 63488 00:08:11.544 } 00:08:11.544 ] 00:08:11.544 } 00:08:11.544 } 00:08:11.544 }' 00:08:11.544 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.544 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:11.544 BaseBdev2 00:08:11.544 BaseBdev3' 00:08:11.544 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.544 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.544 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.544 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.544 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:08:11.544 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.544 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.804 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.805 [2024-10-15 01:09:24.361585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.805 [2024-10-15 01:09:24.361614] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.805 [2024-10-15 01:09:24.361710] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.805 [2024-10-15 01:09:24.361766] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.805 [2024-10-15 01:09:24.361787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75396 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75396 ']' 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 75396 00:08:11.805 01:09:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75396 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:11.805 killing process with pid 75396 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75396' 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75396 00:08:11.805 [2024-10-15 01:09:24.411532] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:11.805 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75396 00:08:11.805 [2024-10-15 01:09:24.442776] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.065 01:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:12.065 00:08:12.065 real 0m8.492s 00:08:12.065 user 0m14.583s 00:08:12.065 sys 0m1.651s 00:08:12.065 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.065 01:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.065 ************************************ 00:08:12.065 END TEST raid_state_function_test_sb 00:08:12.065 ************************************ 00:08:12.065 01:09:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:12.065 01:09:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:12.065 01:09:24 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.065 01:09:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.065 ************************************ 00:08:12.065 START TEST raid_superblock_test 00:08:12.065 ************************************ 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:12.065 01:09:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76000 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76000 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76000 ']' 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.065 01:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.326 [2024-10-15 01:09:24.806107] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:08:12.326 [2024-10-15 01:09:24.806699] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76000 ] 00:08:12.326 [2024-10-15 01:09:24.950483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.326 [2024-10-15 01:09:24.977132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.326 [2024-10-15 01:09:25.019852] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.326 [2024-10-15 01:09:25.019890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:13.264 
01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.264 malloc1 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.264 [2024-10-15 01:09:25.650416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:13.264 [2024-10-15 01:09:25.650474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.264 [2024-10-15 01:09:25.650495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:13.264 [2024-10-15 01:09:25.650505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.264 [2024-10-15 01:09:25.652595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.264 [2024-10-15 01:09:25.652633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:13.264 pt1 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.264 malloc2 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.264 [2024-10-15 01:09:25.679300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:13.264 [2024-10-15 01:09:25.679411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.264 [2024-10-15 01:09:25.679464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:13.264 [2024-10-15 01:09:25.679498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.264 [2024-10-15 01:09:25.681584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.264 [2024-10-15 01:09:25.681653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:13.264 
pt2 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.264 malloc3 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.264 [2024-10-15 01:09:25.707905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:13.264 [2024-10-15 01:09:25.708012] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.264 [2024-10-15 01:09:25.708047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:13.264 [2024-10-15 01:09:25.708080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.264 [2024-10-15 01:09:25.710140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.264 [2024-10-15 01:09:25.710222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:13.264 pt3 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.264 [2024-10-15 01:09:25.719941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:13.264 [2024-10-15 01:09:25.721794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:13.264 [2024-10-15 01:09:25.721888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:13.264 [2024-10-15 01:09:25.722047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:13.264 [2024-10-15 01:09:25.722090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:13.264 [2024-10-15 01:09:25.722359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:08:13.264 [2024-10-15 01:09:25.722534] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:13.264 [2024-10-15 01:09:25.722578] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:13.264 [2024-10-15 01:09:25.722732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.264 01:09:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.264 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.265 "name": "raid_bdev1", 00:08:13.265 "uuid": "fbc31656-ab3e-41ba-812b-6926b2e769ba", 00:08:13.265 "strip_size_kb": 64, 00:08:13.265 "state": "online", 00:08:13.265 "raid_level": "raid0", 00:08:13.265 "superblock": true, 00:08:13.265 "num_base_bdevs": 3, 00:08:13.265 "num_base_bdevs_discovered": 3, 00:08:13.265 "num_base_bdevs_operational": 3, 00:08:13.265 "base_bdevs_list": [ 00:08:13.265 { 00:08:13.265 "name": "pt1", 00:08:13.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.265 "is_configured": true, 00:08:13.265 "data_offset": 2048, 00:08:13.265 "data_size": 63488 00:08:13.265 }, 00:08:13.265 { 00:08:13.265 "name": "pt2", 00:08:13.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.265 "is_configured": true, 00:08:13.265 "data_offset": 2048, 00:08:13.265 "data_size": 63488 00:08:13.265 }, 00:08:13.265 { 00:08:13.265 "name": "pt3", 00:08:13.265 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:13.265 "is_configured": true, 00:08:13.265 "data_offset": 2048, 00:08:13.265 "data_size": 63488 00:08:13.265 } 00:08:13.265 ] 00:08:13.265 }' 00:08:13.265 01:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.265 01:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.524 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:13.524 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:13.524 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.524 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:13.524 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.524 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.524 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.524 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.524 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.524 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.525 [2024-10-15 01:09:26.151582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.525 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.525 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.525 "name": "raid_bdev1", 00:08:13.525 "aliases": [ 00:08:13.525 "fbc31656-ab3e-41ba-812b-6926b2e769ba" 00:08:13.525 ], 00:08:13.525 "product_name": "Raid Volume", 00:08:13.525 "block_size": 512, 00:08:13.525 "num_blocks": 190464, 00:08:13.525 "uuid": "fbc31656-ab3e-41ba-812b-6926b2e769ba", 00:08:13.525 "assigned_rate_limits": { 00:08:13.525 "rw_ios_per_sec": 0, 00:08:13.525 "rw_mbytes_per_sec": 0, 00:08:13.525 "r_mbytes_per_sec": 0, 00:08:13.525 "w_mbytes_per_sec": 0 00:08:13.525 }, 00:08:13.525 "claimed": false, 00:08:13.525 "zoned": false, 00:08:13.525 "supported_io_types": { 00:08:13.525 "read": true, 00:08:13.525 "write": true, 00:08:13.525 "unmap": true, 00:08:13.525 "flush": true, 00:08:13.525 "reset": true, 00:08:13.525 "nvme_admin": false, 00:08:13.525 "nvme_io": false, 00:08:13.525 "nvme_io_md": false, 00:08:13.525 "write_zeroes": true, 00:08:13.525 "zcopy": false, 00:08:13.525 "get_zone_info": false, 00:08:13.525 "zone_management": false, 00:08:13.525 "zone_append": false, 00:08:13.525 "compare": 
false, 00:08:13.525 "compare_and_write": false, 00:08:13.525 "abort": false, 00:08:13.525 "seek_hole": false, 00:08:13.525 "seek_data": false, 00:08:13.525 "copy": false, 00:08:13.525 "nvme_iov_md": false 00:08:13.525 }, 00:08:13.525 "memory_domains": [ 00:08:13.525 { 00:08:13.525 "dma_device_id": "system", 00:08:13.525 "dma_device_type": 1 00:08:13.525 }, 00:08:13.525 { 00:08:13.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.525 "dma_device_type": 2 00:08:13.525 }, 00:08:13.525 { 00:08:13.525 "dma_device_id": "system", 00:08:13.525 "dma_device_type": 1 00:08:13.525 }, 00:08:13.525 { 00:08:13.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.525 "dma_device_type": 2 00:08:13.525 }, 00:08:13.525 { 00:08:13.525 "dma_device_id": "system", 00:08:13.525 "dma_device_type": 1 00:08:13.525 }, 00:08:13.525 { 00:08:13.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.525 "dma_device_type": 2 00:08:13.525 } 00:08:13.525 ], 00:08:13.525 "driver_specific": { 00:08:13.525 "raid": { 00:08:13.525 "uuid": "fbc31656-ab3e-41ba-812b-6926b2e769ba", 00:08:13.525 "strip_size_kb": 64, 00:08:13.525 "state": "online", 00:08:13.525 "raid_level": "raid0", 00:08:13.525 "superblock": true, 00:08:13.525 "num_base_bdevs": 3, 00:08:13.525 "num_base_bdevs_discovered": 3, 00:08:13.525 "num_base_bdevs_operational": 3, 00:08:13.525 "base_bdevs_list": [ 00:08:13.525 { 00:08:13.525 "name": "pt1", 00:08:13.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.525 "is_configured": true, 00:08:13.525 "data_offset": 2048, 00:08:13.525 "data_size": 63488 00:08:13.525 }, 00:08:13.525 { 00:08:13.525 "name": "pt2", 00:08:13.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.525 "is_configured": true, 00:08:13.525 "data_offset": 2048, 00:08:13.525 "data_size": 63488 00:08:13.525 }, 00:08:13.525 { 00:08:13.525 "name": "pt3", 00:08:13.525 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:13.525 "is_configured": true, 00:08:13.525 "data_offset": 2048, 00:08:13.525 "data_size": 
63488 00:08:13.525 } 00:08:13.525 ] 00:08:13.525 } 00:08:13.525 } 00:08:13.525 }' 00:08:13.525 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.525 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:13.525 pt2 00:08:13.525 pt3' 00:08:13.525 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.790 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.790 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.790 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.790 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:13.790 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.791 
01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.791 [2024-10-15 01:09:26.442911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fbc31656-ab3e-41ba-812b-6926b2e769ba 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fbc31656-ab3e-41ba-812b-6926b2e769ba ']' 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.791 [2024-10-15 01:09:26.490590] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.791 [2024-10-15 01:09:26.490616] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.791 [2024-10-15 01:09:26.490709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.791 [2024-10-15 01:09:26.490776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.791 [2024-10-15 01:09:26.490789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.791 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.061 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:14.062 01:09:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.062 [2024-10-15 01:09:26.650394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:14.062 [2024-10-15 01:09:26.652769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:14.062 [2024-10-15 01:09:26.652821] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:14.062 [2024-10-15 01:09:26.652876] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:14.062 [2024-10-15 01:09:26.652926] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:14.062 [2024-10-15 01:09:26.652964] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:14.062 [2024-10-15 01:09:26.652979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.062 [2024-10-15 01:09:26.652991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:14.062 request: 00:08:14.062 { 00:08:14.062 "name": "raid_bdev1", 00:08:14.062 "raid_level": "raid0", 00:08:14.062 "base_bdevs": [ 00:08:14.062 "malloc1", 00:08:14.062 "malloc2", 00:08:14.062 "malloc3" 00:08:14.062 ], 00:08:14.062 "strip_size_kb": 64, 00:08:14.062 "superblock": false, 00:08:14.062 "method": "bdev_raid_create", 00:08:14.062 "req_id": 1 00:08:14.062 } 00:08:14.062 Got JSON-RPC error response 00:08:14.062 response: 00:08:14.062 { 00:08:14.062 "code": -17, 00:08:14.062 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:14.062 } 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.062 [2024-10-15 01:09:26.710261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:14.062 [2024-10-15 01:09:26.710388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.062 [2024-10-15 01:09:26.710438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:14.062 [2024-10-15 01:09:26.710487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.062 [2024-10-15 01:09:26.713100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.062 [2024-10-15 01:09:26.713193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:14.062 [2024-10-15 01:09:26.713306] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:14.062 [2024-10-15 01:09:26.713406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:14.062 pt1 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.062 "name": "raid_bdev1", 00:08:14.062 "uuid": "fbc31656-ab3e-41ba-812b-6926b2e769ba", 00:08:14.062 
"strip_size_kb": 64, 00:08:14.062 "state": "configuring", 00:08:14.062 "raid_level": "raid0", 00:08:14.062 "superblock": true, 00:08:14.062 "num_base_bdevs": 3, 00:08:14.062 "num_base_bdevs_discovered": 1, 00:08:14.062 "num_base_bdevs_operational": 3, 00:08:14.062 "base_bdevs_list": [ 00:08:14.062 { 00:08:14.062 "name": "pt1", 00:08:14.062 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.062 "is_configured": true, 00:08:14.062 "data_offset": 2048, 00:08:14.062 "data_size": 63488 00:08:14.062 }, 00:08:14.062 { 00:08:14.062 "name": null, 00:08:14.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.062 "is_configured": false, 00:08:14.062 "data_offset": 2048, 00:08:14.062 "data_size": 63488 00:08:14.062 }, 00:08:14.062 { 00:08:14.062 "name": null, 00:08:14.062 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:14.062 "is_configured": false, 00:08:14.062 "data_offset": 2048, 00:08:14.062 "data_size": 63488 00:08:14.062 } 00:08:14.062 ] 00:08:14.062 }' 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.062 01:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.631 [2024-10-15 01:09:27.169481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.631 [2024-10-15 01:09:27.169636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.631 [2024-10-15 01:09:27.169664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:08:14.631 [2024-10-15 01:09:27.169678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.631 [2024-10-15 01:09:27.170104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.631 [2024-10-15 01:09:27.170133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.631 [2024-10-15 01:09:27.170231] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:14.631 [2024-10-15 01:09:27.170265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.631 pt2 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.631 [2024-10-15 01:09:27.177446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.631 01:09:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.631 "name": "raid_bdev1", 00:08:14.631 "uuid": "fbc31656-ab3e-41ba-812b-6926b2e769ba", 00:08:14.631 "strip_size_kb": 64, 00:08:14.631 "state": "configuring", 00:08:14.631 "raid_level": "raid0", 00:08:14.631 "superblock": true, 00:08:14.631 "num_base_bdevs": 3, 00:08:14.631 "num_base_bdevs_discovered": 1, 00:08:14.631 "num_base_bdevs_operational": 3, 00:08:14.631 "base_bdevs_list": [ 00:08:14.631 { 00:08:14.631 "name": "pt1", 00:08:14.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.631 "is_configured": true, 00:08:14.631 "data_offset": 2048, 00:08:14.631 "data_size": 63488 00:08:14.631 }, 00:08:14.631 { 00:08:14.631 "name": null, 00:08:14.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.631 "is_configured": false, 00:08:14.631 "data_offset": 0, 00:08:14.631 "data_size": 63488 00:08:14.631 }, 00:08:14.631 { 00:08:14.631 "name": null, 00:08:14.631 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:14.631 
"is_configured": false, 00:08:14.631 "data_offset": 2048, 00:08:14.631 "data_size": 63488 00:08:14.631 } 00:08:14.631 ] 00:08:14.631 }' 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.631 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.890 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:14.890 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:14.890 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.890 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.890 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.890 [2024-10-15 01:09:27.604724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.890 [2024-10-15 01:09:27.604828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.890 [2024-10-15 01:09:27.604869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:14.890 [2024-10-15 01:09:27.604897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.890 [2024-10-15 01:09:27.605309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.890 [2024-10-15 01:09:27.605369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.890 [2024-10-15 01:09:27.605468] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:14.890 [2024-10-15 01:09:27.605516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.890 pt2 00:08:14.890 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:14.890 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:14.890 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:14.890 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:14.890 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.891 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.149 [2024-10-15 01:09:27.616701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:15.149 [2024-10-15 01:09:27.616785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.149 [2024-10-15 01:09:27.616821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:15.149 [2024-10-15 01:09:27.616847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.149 [2024-10-15 01:09:27.617226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.149 [2024-10-15 01:09:27.617287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:15.149 [2024-10-15 01:09:27.617371] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:15.149 [2024-10-15 01:09:27.617416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:15.149 [2024-10-15 01:09:27.617535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:15.149 [2024-10-15 01:09:27.617571] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:15.149 [2024-10-15 01:09:27.617827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:15.149 [2024-10-15 01:09:27.617966] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:15.149 [2024-10-15 01:09:27.618007] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:15.149 [2024-10-15 01:09:27.618138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.149 pt3 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.149 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.149 "name": "raid_bdev1", 00:08:15.149 "uuid": "fbc31656-ab3e-41ba-812b-6926b2e769ba", 00:08:15.149 "strip_size_kb": 64, 00:08:15.149 "state": "online", 00:08:15.149 "raid_level": "raid0", 00:08:15.149 "superblock": true, 00:08:15.149 "num_base_bdevs": 3, 00:08:15.149 "num_base_bdevs_discovered": 3, 00:08:15.149 "num_base_bdevs_operational": 3, 00:08:15.149 "base_bdevs_list": [ 00:08:15.149 { 00:08:15.149 "name": "pt1", 00:08:15.149 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.149 "is_configured": true, 00:08:15.149 "data_offset": 2048, 00:08:15.150 "data_size": 63488 00:08:15.150 }, 00:08:15.150 { 00:08:15.150 "name": "pt2", 00:08:15.150 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.150 "is_configured": true, 00:08:15.150 "data_offset": 2048, 00:08:15.150 "data_size": 63488 00:08:15.150 }, 00:08:15.150 { 00:08:15.150 "name": "pt3", 00:08:15.150 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:15.150 "is_configured": true, 00:08:15.150 "data_offset": 2048, 00:08:15.150 "data_size": 63488 00:08:15.150 } 00:08:15.150 ] 00:08:15.150 }' 00:08:15.150 01:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.150 01:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.415 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:15.415 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:15.415 01:09:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:15.415 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:15.415 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:15.415 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:15.415 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:15.415 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.415 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.415 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.415 [2024-10-15 01:09:28.020359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.415 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.415 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:15.415 "name": "raid_bdev1", 00:08:15.415 "aliases": [ 00:08:15.415 "fbc31656-ab3e-41ba-812b-6926b2e769ba" 00:08:15.415 ], 00:08:15.415 "product_name": "Raid Volume", 00:08:15.415 "block_size": 512, 00:08:15.415 "num_blocks": 190464, 00:08:15.415 "uuid": "fbc31656-ab3e-41ba-812b-6926b2e769ba", 00:08:15.415 "assigned_rate_limits": { 00:08:15.415 "rw_ios_per_sec": 0, 00:08:15.415 "rw_mbytes_per_sec": 0, 00:08:15.415 "r_mbytes_per_sec": 0, 00:08:15.415 "w_mbytes_per_sec": 0 00:08:15.415 }, 00:08:15.415 "claimed": false, 00:08:15.415 "zoned": false, 00:08:15.415 "supported_io_types": { 00:08:15.415 "read": true, 00:08:15.415 "write": true, 00:08:15.415 "unmap": true, 00:08:15.415 "flush": true, 00:08:15.415 "reset": true, 00:08:15.415 "nvme_admin": false, 00:08:15.415 "nvme_io": false, 00:08:15.415 "nvme_io_md": false, 00:08:15.415 
"write_zeroes": true, 00:08:15.415 "zcopy": false, 00:08:15.415 "get_zone_info": false, 00:08:15.415 "zone_management": false, 00:08:15.415 "zone_append": false, 00:08:15.415 "compare": false, 00:08:15.415 "compare_and_write": false, 00:08:15.416 "abort": false, 00:08:15.416 "seek_hole": false, 00:08:15.416 "seek_data": false, 00:08:15.416 "copy": false, 00:08:15.416 "nvme_iov_md": false 00:08:15.416 }, 00:08:15.416 "memory_domains": [ 00:08:15.416 { 00:08:15.416 "dma_device_id": "system", 00:08:15.416 "dma_device_type": 1 00:08:15.416 }, 00:08:15.416 { 00:08:15.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.416 "dma_device_type": 2 00:08:15.416 }, 00:08:15.416 { 00:08:15.416 "dma_device_id": "system", 00:08:15.416 "dma_device_type": 1 00:08:15.416 }, 00:08:15.416 { 00:08:15.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.416 "dma_device_type": 2 00:08:15.416 }, 00:08:15.416 { 00:08:15.416 "dma_device_id": "system", 00:08:15.416 "dma_device_type": 1 00:08:15.416 }, 00:08:15.416 { 00:08:15.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.416 "dma_device_type": 2 00:08:15.416 } 00:08:15.416 ], 00:08:15.416 "driver_specific": { 00:08:15.416 "raid": { 00:08:15.416 "uuid": "fbc31656-ab3e-41ba-812b-6926b2e769ba", 00:08:15.416 "strip_size_kb": 64, 00:08:15.416 "state": "online", 00:08:15.416 "raid_level": "raid0", 00:08:15.416 "superblock": true, 00:08:15.416 "num_base_bdevs": 3, 00:08:15.416 "num_base_bdevs_discovered": 3, 00:08:15.416 "num_base_bdevs_operational": 3, 00:08:15.416 "base_bdevs_list": [ 00:08:15.416 { 00:08:15.416 "name": "pt1", 00:08:15.416 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.416 "is_configured": true, 00:08:15.416 "data_offset": 2048, 00:08:15.416 "data_size": 63488 00:08:15.416 }, 00:08:15.416 { 00:08:15.416 "name": "pt2", 00:08:15.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.416 "is_configured": true, 00:08:15.416 "data_offset": 2048, 00:08:15.416 "data_size": 63488 00:08:15.416 }, 00:08:15.416 
{ 00:08:15.416 "name": "pt3", 00:08:15.416 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:15.416 "is_configured": true, 00:08:15.416 "data_offset": 2048, 00:08:15.416 "data_size": 63488 00:08:15.416 } 00:08:15.416 ] 00:08:15.416 } 00:08:15.416 } 00:08:15.416 }' 00:08:15.416 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.417 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:15.417 pt2 00:08:15.417 pt3' 00:08:15.417 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.417 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:15.417 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.417 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:15.417 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.417 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.417 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:15.681 01:09:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.681 
[2024-10-15 01:09:28.287823] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fbc31656-ab3e-41ba-812b-6926b2e769ba '!=' fbc31656-ab3e-41ba-812b-6926b2e769ba ']' 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76000 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76000 ']' 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76000 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76000 00:08:15.681 killing process with pid 76000 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76000' 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76000 00:08:15.681 [2024-10-15 01:09:28.368863] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:15.681 [2024-10-15 01:09:28.368940] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.681 [2024-10-15 01:09:28.369005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.681 [2024-10-15 01:09:28.369015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:15.681 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76000 00:08:15.681 [2024-10-15 01:09:28.402850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:15.941 01:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:15.941 00:08:15.941 real 0m3.895s 00:08:15.941 user 0m6.169s 00:08:15.941 sys 0m0.836s 00:08:15.941 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.941 ************************************ 00:08:15.941 END TEST raid_superblock_test 00:08:15.941 ************************************ 00:08:15.941 01:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.201 01:09:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:16.201 01:09:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:16.201 01:09:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.201 01:09:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.201 ************************************ 00:08:16.201 START TEST raid_read_error_test 00:08:16.201 ************************************ 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:16.201 01:09:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.T5rYu4KYTZ 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76242 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76242 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76242 ']' 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.201 01:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.201 [2024-10-15 01:09:28.786355] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:08:16.201 [2024-10-15 01:09:28.786582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76242 ] 00:08:16.201 [2024-10-15 01:09:28.917279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.461 [2024-10-15 01:09:28.943091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.461 [2024-10-15 01:09:28.986151] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.461 [2024-10-15 01:09:28.986189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.031 BaseBdev1_malloc 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.031 true 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.031 [2024-10-15 01:09:29.660720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:17.031 [2024-10-15 01:09:29.660823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.031 [2024-10-15 01:09:29.660871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:17.031 [2024-10-15 01:09:29.660899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.031 [2024-10-15 01:09:29.663052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.031 [2024-10-15 01:09:29.663118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:17.031 BaseBdev1 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.031 BaseBdev2_malloc 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.031 true 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.031 [2024-10-15 01:09:29.701687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:17.031 [2024-10-15 01:09:29.701778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.031 [2024-10-15 01:09:29.701814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:17.031 [2024-10-15 01:09:29.701853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.031 [2024-10-15 01:09:29.704083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.031 [2024-10-15 01:09:29.704160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:17.031 BaseBdev2 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.031 BaseBdev3_malloc 00:08:17.031 01:09:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.031 true 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.031 [2024-10-15 01:09:29.742647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:17.031 [2024-10-15 01:09:29.742755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.031 [2024-10-15 01:09:29.742781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:17.031 [2024-10-15 01:09:29.742790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.031 [2024-10-15 01:09:29.744880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.031 [2024-10-15 01:09:29.744914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:17.031 BaseBdev3 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.031 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.291 [2024-10-15 01:09:29.754727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.291 [2024-10-15 01:09:29.756628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.291 [2024-10-15 01:09:29.756721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:17.291 [2024-10-15 01:09:29.756888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:17.291 [2024-10-15 01:09:29.756907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:17.291 [2024-10-15 01:09:29.757168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:17.291 [2024-10-15 01:09:29.757322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:17.291 [2024-10-15 01:09:29.757335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:17.291 [2024-10-15 01:09:29.757469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.291 01:09:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.291 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.291 "name": "raid_bdev1", 00:08:17.291 "uuid": "28e2c5bc-00c9-4df3-9164-5cb44ef761de", 00:08:17.291 "strip_size_kb": 64, 00:08:17.291 "state": "online", 00:08:17.292 "raid_level": "raid0", 00:08:17.292 "superblock": true, 00:08:17.292 "num_base_bdevs": 3, 00:08:17.292 "num_base_bdevs_discovered": 3, 00:08:17.292 "num_base_bdevs_operational": 3, 00:08:17.292 "base_bdevs_list": [ 00:08:17.292 { 00:08:17.292 "name": "BaseBdev1", 00:08:17.292 "uuid": "6052309c-8be3-5b64-b514-f2ade69e3621", 00:08:17.292 "is_configured": true, 00:08:17.292 "data_offset": 2048, 00:08:17.292 "data_size": 63488 00:08:17.292 }, 00:08:17.292 { 00:08:17.292 "name": "BaseBdev2", 00:08:17.292 "uuid": "a6f458fc-4994-5afa-8ad8-785660f99ba1", 00:08:17.292 "is_configured": true, 00:08:17.292 "data_offset": 2048, 00:08:17.292 "data_size": 63488 
00:08:17.292 }, 00:08:17.292 { 00:08:17.292 "name": "BaseBdev3", 00:08:17.292 "uuid": "7e1d6648-9c98-5be6-bb20-5681057a8592", 00:08:17.292 "is_configured": true, 00:08:17.292 "data_offset": 2048, 00:08:17.292 "data_size": 63488 00:08:17.292 } 00:08:17.292 ] 00:08:17.292 }' 00:08:17.292 01:09:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.292 01:09:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.551 01:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:17.551 01:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:17.811 [2024-10-15 01:09:30.306138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.749 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.749 "name": "raid_bdev1", 00:08:18.749 "uuid": "28e2c5bc-00c9-4df3-9164-5cb44ef761de", 00:08:18.749 "strip_size_kb": 64, 00:08:18.749 "state": "online", 00:08:18.750 "raid_level": "raid0", 00:08:18.750 "superblock": true, 00:08:18.750 "num_base_bdevs": 3, 00:08:18.750 "num_base_bdevs_discovered": 3, 00:08:18.750 "num_base_bdevs_operational": 3, 00:08:18.750 "base_bdevs_list": [ 00:08:18.750 { 00:08:18.750 "name": "BaseBdev1", 00:08:18.750 "uuid": "6052309c-8be3-5b64-b514-f2ade69e3621", 00:08:18.750 "is_configured": true, 00:08:18.750 "data_offset": 2048, 00:08:18.750 "data_size": 63488 
00:08:18.750 }, 00:08:18.750 { 00:08:18.750 "name": "BaseBdev2", 00:08:18.750 "uuid": "a6f458fc-4994-5afa-8ad8-785660f99ba1", 00:08:18.750 "is_configured": true, 00:08:18.750 "data_offset": 2048, 00:08:18.750 "data_size": 63488 00:08:18.750 }, 00:08:18.750 { 00:08:18.750 "name": "BaseBdev3", 00:08:18.750 "uuid": "7e1d6648-9c98-5be6-bb20-5681057a8592", 00:08:18.750 "is_configured": true, 00:08:18.750 "data_offset": 2048, 00:08:18.750 "data_size": 63488 00:08:18.750 } 00:08:18.750 ] 00:08:18.750 }' 00:08:18.750 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.750 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.009 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:19.009 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.009 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.009 [2024-10-15 01:09:31.661687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.009 [2024-10-15 01:09:31.661788] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.009 [2024-10-15 01:09:31.664415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.009 [2024-10-15 01:09:31.664504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.009 [2024-10-15 01:09:31.664558] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.009 [2024-10-15 01:09:31.664614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:19.009 { 00:08:19.009 "results": [ 00:08:19.009 { 00:08:19.009 "job": "raid_bdev1", 00:08:19.009 "core_mask": "0x1", 00:08:19.009 "workload": "randrw", 00:08:19.009 "percentage": 50, 
00:08:19.009 "status": "finished", 00:08:19.009 "queue_depth": 1, 00:08:19.009 "io_size": 131072, 00:08:19.009 "runtime": 1.356322, 00:08:19.009 "iops": 16911.913247739107, 00:08:19.009 "mibps": 2113.9891559673883, 00:08:19.009 "io_failed": 1, 00:08:19.009 "io_timeout": 0, 00:08:19.009 "avg_latency_us": 81.90719910086196, 00:08:19.009 "min_latency_us": 23.02882096069869, 00:08:19.009 "max_latency_us": 1359.3711790393013 00:08:19.009 } 00:08:19.009 ], 00:08:19.009 "core_count": 1 00:08:19.009 } 00:08:19.009 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.009 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76242 00:08:19.009 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76242 ']' 00:08:19.009 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76242 00:08:19.009 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:19.009 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.009 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76242 00:08:19.009 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.009 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.009 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76242' 00:08:19.009 killing process with pid 76242 00:08:19.009 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76242 00:08:19.009 [2024-10-15 01:09:31.702866] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.009 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76242 00:08:19.009 [2024-10-15 
01:09:31.728767] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:19.269 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.T5rYu4KYTZ 00:08:19.269 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:19.269 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:19.269 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:19.269 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:19.269 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:19.269 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:19.269 01:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:19.269 00:08:19.269 real 0m3.253s 00:08:19.269 user 0m4.186s 00:08:19.269 sys 0m0.471s 00:08:19.269 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.269 01:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.269 ************************************ 00:08:19.269 END TEST raid_read_error_test 00:08:19.269 ************************************ 00:08:19.530 01:09:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:19.530 01:09:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:19.530 01:09:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.530 01:09:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:19.530 ************************************ 00:08:19.530 START TEST raid_write_error_test 00:08:19.530 ************************************ 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:19.530 01:09:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:19.530 01:09:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QJTu0OGuJl 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76371 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76371 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76371 ']' 00:08:19.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.530 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.530 [2024-10-15 01:09:32.112002] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:08:19.531 [2024-10-15 01:09:32.112132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76371 ] 00:08:19.531 [2024-10-15 01:09:32.238440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.790 [2024-10-15 01:09:32.266135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.790 [2024-10-15 01:09:32.309323] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.790 [2024-10-15 01:09:32.309445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.359 BaseBdev1_malloc 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.359 true 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.359 [2024-10-15 01:09:32.992140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:20.359 [2024-10-15 01:09:32.992212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.359 [2024-10-15 01:09:32.992234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:20.359 [2024-10-15 01:09:32.992243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.359 [2024-10-15 01:09:32.994342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.359 [2024-10-15 01:09:32.994374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:20.359 BaseBdev1 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.359 01:09:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.359 BaseBdev2_malloc 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.359 true 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.359 [2024-10-15 01:09:33.032761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:20.359 [2024-10-15 01:09:33.032811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.359 [2024-10-15 01:09:33.032829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:20.359 [2024-10-15 01:09:33.032846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.359 [2024-10-15 01:09:33.034892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.359 [2024-10-15 01:09:33.034926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:20.359 BaseBdev2 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:20.359 01:09:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.359 BaseBdev3_malloc 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.359 true 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.359 [2024-10-15 01:09:33.073354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:20.359 [2024-10-15 01:09:33.073407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.359 [2024-10-15 01:09:33.073430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:20.359 [2024-10-15 01:09:33.073438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.359 [2024-10-15 01:09:33.075480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.359 [2024-10-15 01:09:33.075513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:20.359 BaseBdev3 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.359 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.619 [2024-10-15 01:09:33.085415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:20.619 [2024-10-15 01:09:33.087241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.619 [2024-10-15 01:09:33.087314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:20.619 [2024-10-15 01:09:33.087508] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:20.619 [2024-10-15 01:09:33.087523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:20.619 [2024-10-15 01:09:33.087771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:20.619 [2024-10-15 01:09:33.087922] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:20.619 [2024-10-15 01:09:33.087932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:20.619 [2024-10-15 01:09:33.088054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.619 "name": "raid_bdev1", 00:08:20.619 "uuid": "6f0ad57c-8702-4245-82ef-90095607b41e", 00:08:20.619 "strip_size_kb": 64, 00:08:20.619 "state": "online", 00:08:20.619 "raid_level": "raid0", 00:08:20.619 "superblock": true, 00:08:20.619 "num_base_bdevs": 3, 00:08:20.619 "num_base_bdevs_discovered": 3, 00:08:20.619 "num_base_bdevs_operational": 3, 00:08:20.619 "base_bdevs_list": [ 00:08:20.619 { 00:08:20.619 "name": "BaseBdev1", 
00:08:20.619 "uuid": "08f3cab9-c703-53b5-9018-35b16b93dcab", 00:08:20.619 "is_configured": true, 00:08:20.619 "data_offset": 2048, 00:08:20.619 "data_size": 63488 00:08:20.619 }, 00:08:20.619 { 00:08:20.619 "name": "BaseBdev2", 00:08:20.619 "uuid": "64b68950-d1b1-5095-aaa0-7365adb10477", 00:08:20.619 "is_configured": true, 00:08:20.619 "data_offset": 2048, 00:08:20.619 "data_size": 63488 00:08:20.619 }, 00:08:20.619 { 00:08:20.619 "name": "BaseBdev3", 00:08:20.619 "uuid": "ff44da78-1a3c-59af-84f7-ac8509e007ab", 00:08:20.619 "is_configured": true, 00:08:20.619 "data_offset": 2048, 00:08:20.619 "data_size": 63488 00:08:20.619 } 00:08:20.619 ] 00:08:20.619 }' 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.619 01:09:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.885 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:20.885 01:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:20.885 [2024-10-15 01:09:33.604932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.826 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.086 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.086 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.086 01:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.086 01:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.086 01:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.086 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.086 "name": "raid_bdev1", 00:08:22.086 "uuid": "6f0ad57c-8702-4245-82ef-90095607b41e", 00:08:22.086 "strip_size_kb": 64, 00:08:22.086 "state": "online", 00:08:22.086 
"raid_level": "raid0", 00:08:22.086 "superblock": true, 00:08:22.086 "num_base_bdevs": 3, 00:08:22.086 "num_base_bdevs_discovered": 3, 00:08:22.086 "num_base_bdevs_operational": 3, 00:08:22.086 "base_bdevs_list": [ 00:08:22.086 { 00:08:22.086 "name": "BaseBdev1", 00:08:22.086 "uuid": "08f3cab9-c703-53b5-9018-35b16b93dcab", 00:08:22.086 "is_configured": true, 00:08:22.086 "data_offset": 2048, 00:08:22.086 "data_size": 63488 00:08:22.086 }, 00:08:22.086 { 00:08:22.086 "name": "BaseBdev2", 00:08:22.086 "uuid": "64b68950-d1b1-5095-aaa0-7365adb10477", 00:08:22.086 "is_configured": true, 00:08:22.086 "data_offset": 2048, 00:08:22.086 "data_size": 63488 00:08:22.086 }, 00:08:22.086 { 00:08:22.086 "name": "BaseBdev3", 00:08:22.086 "uuid": "ff44da78-1a3c-59af-84f7-ac8509e007ab", 00:08:22.086 "is_configured": true, 00:08:22.086 "data_offset": 2048, 00:08:22.086 "data_size": 63488 00:08:22.086 } 00:08:22.086 ] 00:08:22.086 }' 00:08:22.086 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.086 01:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.346 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:22.346 01:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.346 01:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.346 [2024-10-15 01:09:34.975364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.346 [2024-10-15 01:09:34.975464] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.346 [2024-10-15 01:09:34.978058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.346 [2024-10-15 01:09:34.978169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.346 [2024-10-15 01:09:34.978250] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.346 [2024-10-15 01:09:34.978318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:22.346 { 00:08:22.346 "results": [ 00:08:22.346 { 00:08:22.346 "job": "raid_bdev1", 00:08:22.346 "core_mask": "0x1", 00:08:22.346 "workload": "randrw", 00:08:22.346 "percentage": 50, 00:08:22.346 "status": "finished", 00:08:22.346 "queue_depth": 1, 00:08:22.346 "io_size": 131072, 00:08:22.346 "runtime": 1.371262, 00:08:22.346 "iops": 17248.34495523102, 00:08:22.346 "mibps": 2156.0431194038774, 00:08:22.346 "io_failed": 1, 00:08:22.346 "io_timeout": 0, 00:08:22.346 "avg_latency_us": 80.33806692357129, 00:08:22.346 "min_latency_us": 22.134497816593885, 00:08:22.346 "max_latency_us": 1409.4532751091704 00:08:22.346 } 00:08:22.346 ], 00:08:22.346 "core_count": 1 00:08:22.346 } 00:08:22.346 01:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.346 01:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76371 00:08:22.346 01:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76371 ']' 00:08:22.346 01:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76371 00:08:22.346 01:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:22.346 01:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.346 01:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76371 00:08:22.346 killing process with pid 76371 00:08:22.346 01:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:22.346 01:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:22.346 01:09:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76371' 00:08:22.346 01:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76371 00:08:22.346 [2024-10-15 01:09:35.033861] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.346 01:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76371 00:08:22.346 [2024-10-15 01:09:35.059922] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:22.606 01:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QJTu0OGuJl 00:08:22.606 01:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:22.606 01:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:22.606 01:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:22.606 01:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:22.606 ************************************ 00:08:22.606 END TEST raid_write_error_test 00:08:22.606 ************************************ 00:08:22.606 01:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:22.606 01:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:22.606 01:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:22.606 00:08:22.606 real 0m3.255s 00:08:22.606 user 0m4.143s 00:08:22.606 sys 0m0.486s 00:08:22.606 01:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.606 01:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.606 01:09:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:22.606 01:09:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:22.606 01:09:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:22.606 01:09:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.606 01:09:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.866 ************************************ 00:08:22.866 START TEST raid_state_function_test 00:08:22.866 ************************************ 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:22.866 01:09:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76498 00:08:22.866 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:22.867 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76498' 00:08:22.867 Process raid pid: 76498 00:08:22.867 01:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76498 00:08:22.867 01:09:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76498 ']' 00:08:22.867 01:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.867 01:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.867 01:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.867 01:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.867 01:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.867 [2024-10-15 01:09:35.430050] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:08:22.867 [2024-10-15 01:09:35.430275] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.867 [2024-10-15 01:09:35.574978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.127 [2024-10-15 01:09:35.601877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.127 [2024-10-15 01:09:35.644584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.127 [2024-10-15 01:09:35.644706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.696 [2024-10-15 01:09:36.326803] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.696 [2024-10-15 01:09:36.326865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.696 [2024-10-15 01:09:36.326882] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.696 [2024-10-15 01:09:36.326892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.696 [2024-10-15 01:09:36.326899] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:23.696 [2024-10-15 01:09:36.326909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.696 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.696 "name": "Existed_Raid", 00:08:23.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.696 "strip_size_kb": 64, 00:08:23.696 "state": "configuring", 00:08:23.696 "raid_level": "concat", 00:08:23.696 "superblock": false, 00:08:23.696 "num_base_bdevs": 3, 00:08:23.696 "num_base_bdevs_discovered": 0, 00:08:23.696 "num_base_bdevs_operational": 3, 00:08:23.696 "base_bdevs_list": [ 00:08:23.696 { 00:08:23.696 "name": "BaseBdev1", 00:08:23.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.697 "is_configured": false, 00:08:23.697 "data_offset": 0, 00:08:23.697 "data_size": 0 00:08:23.697 }, 00:08:23.697 { 00:08:23.697 "name": "BaseBdev2", 00:08:23.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.697 "is_configured": false, 00:08:23.697 "data_offset": 0, 00:08:23.697 "data_size": 0 00:08:23.697 }, 00:08:23.697 { 00:08:23.697 "name": "BaseBdev3", 00:08:23.697 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:23.697 "is_configured": false, 00:08:23.697 "data_offset": 0, 00:08:23.697 "data_size": 0 00:08:23.697 } 00:08:23.697 ] 00:08:23.697 }' 00:08:23.697 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.697 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.267 [2024-10-15 01:09:36.730018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:24.267 [2024-10-15 01:09:36.730117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.267 [2024-10-15 01:09:36.738019] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:24.267 [2024-10-15 01:09:36.738098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:24.267 [2024-10-15 01:09:36.738124] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.267 [2024-10-15 01:09:36.738146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:24.267 [2024-10-15 01:09:36.738164] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:24.267 [2024-10-15 01:09:36.738196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.267 [2024-10-15 01:09:36.754881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.267 BaseBdev1 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.267 [ 00:08:24.267 { 00:08:24.267 "name": "BaseBdev1", 00:08:24.267 "aliases": [ 00:08:24.267 "1e6bff38-a8f7-47e0-8494-5610d60d03a1" 00:08:24.267 ], 00:08:24.267 "product_name": "Malloc disk", 00:08:24.267 "block_size": 512, 00:08:24.267 "num_blocks": 65536, 00:08:24.267 "uuid": "1e6bff38-a8f7-47e0-8494-5610d60d03a1", 00:08:24.267 "assigned_rate_limits": { 00:08:24.267 "rw_ios_per_sec": 0, 00:08:24.267 "rw_mbytes_per_sec": 0, 00:08:24.267 "r_mbytes_per_sec": 0, 00:08:24.267 "w_mbytes_per_sec": 0 00:08:24.267 }, 00:08:24.267 "claimed": true, 00:08:24.267 "claim_type": "exclusive_write", 00:08:24.267 "zoned": false, 00:08:24.267 "supported_io_types": { 00:08:24.267 "read": true, 00:08:24.267 "write": true, 00:08:24.267 "unmap": true, 00:08:24.267 "flush": true, 00:08:24.267 "reset": true, 00:08:24.267 "nvme_admin": false, 00:08:24.267 "nvme_io": false, 00:08:24.267 "nvme_io_md": false, 00:08:24.267 "write_zeroes": true, 00:08:24.267 "zcopy": true, 00:08:24.267 "get_zone_info": false, 00:08:24.267 "zone_management": false, 00:08:24.267 "zone_append": false, 00:08:24.267 "compare": false, 00:08:24.267 "compare_and_write": false, 00:08:24.267 "abort": true, 00:08:24.267 "seek_hole": false, 00:08:24.267 "seek_data": false, 00:08:24.267 "copy": true, 00:08:24.267 "nvme_iov_md": false 00:08:24.267 }, 00:08:24.267 "memory_domains": [ 00:08:24.267 { 00:08:24.267 "dma_device_id": "system", 00:08:24.267 "dma_device_type": 1 00:08:24.267 }, 00:08:24.267 { 00:08:24.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:24.267 "dma_device_type": 2 00:08:24.267 } 00:08:24.267 ], 00:08:24.267 "driver_specific": {} 00:08:24.267 } 00:08:24.267 ] 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.267 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.268 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.268 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.268 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.268 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.268 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.268 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.268 01:09:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.268 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.268 "name": "Existed_Raid", 00:08:24.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.268 "strip_size_kb": 64, 00:08:24.268 "state": "configuring", 00:08:24.268 "raid_level": "concat", 00:08:24.268 "superblock": false, 00:08:24.268 "num_base_bdevs": 3, 00:08:24.268 "num_base_bdevs_discovered": 1, 00:08:24.268 "num_base_bdevs_operational": 3, 00:08:24.268 "base_bdevs_list": [ 00:08:24.268 { 00:08:24.268 "name": "BaseBdev1", 00:08:24.268 "uuid": "1e6bff38-a8f7-47e0-8494-5610d60d03a1", 00:08:24.268 "is_configured": true, 00:08:24.268 "data_offset": 0, 00:08:24.268 "data_size": 65536 00:08:24.268 }, 00:08:24.268 { 00:08:24.268 "name": "BaseBdev2", 00:08:24.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.268 "is_configured": false, 00:08:24.268 "data_offset": 0, 00:08:24.268 "data_size": 0 00:08:24.268 }, 00:08:24.268 { 00:08:24.268 "name": "BaseBdev3", 00:08:24.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.268 "is_configured": false, 00:08:24.268 "data_offset": 0, 00:08:24.268 "data_size": 0 00:08:24.268 } 00:08:24.268 ] 00:08:24.268 }' 00:08:24.268 01:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.268 01:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.528 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:24.528 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.528 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.528 [2024-10-15 01:09:37.146259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:24.528 [2024-10-15 01:09:37.146316] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:24.528 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.529 [2024-10-15 01:09:37.154353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.529 [2024-10-15 01:09:37.156613] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.529 [2024-10-15 01:09:37.156702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.529 [2024-10-15 01:09:37.156737] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:24.529 [2024-10-15 01:09:37.156766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.529 01:09:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.529 "name": "Existed_Raid", 00:08:24.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.529 "strip_size_kb": 64, 00:08:24.529 "state": "configuring", 00:08:24.529 "raid_level": "concat", 00:08:24.529 "superblock": false, 00:08:24.529 "num_base_bdevs": 3, 00:08:24.529 "num_base_bdevs_discovered": 1, 00:08:24.529 "num_base_bdevs_operational": 3, 00:08:24.529 "base_bdevs_list": [ 00:08:24.529 { 00:08:24.529 "name": "BaseBdev1", 00:08:24.529 "uuid": "1e6bff38-a8f7-47e0-8494-5610d60d03a1", 00:08:24.529 "is_configured": true, 00:08:24.529 "data_offset": 
0, 00:08:24.529 "data_size": 65536 00:08:24.529 }, 00:08:24.529 { 00:08:24.529 "name": "BaseBdev2", 00:08:24.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.529 "is_configured": false, 00:08:24.529 "data_offset": 0, 00:08:24.529 "data_size": 0 00:08:24.529 }, 00:08:24.529 { 00:08:24.529 "name": "BaseBdev3", 00:08:24.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.529 "is_configured": false, 00:08:24.529 "data_offset": 0, 00:08:24.529 "data_size": 0 00:08:24.529 } 00:08:24.529 ] 00:08:24.529 }' 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.529 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.098 [2024-10-15 01:09:37.584538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.098 BaseBdev2 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
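For readers following the trace: the `bdev_malloc_create 32 512 -b BaseBdev2` call above requests a 32 MiB malloc bdev with a 512-byte block size, and the `"num_blocks": 65536` reported by `bdev_get_bdevs` for each base bdev follows directly from that. A sketch of the arithmetic (an illustration, not SPDK code):

```python
# `bdev_malloc_create 32 512` = 32 MiB capacity, 512-byte blocks.
size_mib = 32
block_size = 512

num_blocks = size_mib * 1024 * 1024 // block_size
print(num_blocks)  # 65536, matching the BaseBdev1/BaseBdev2 dumps in this log
```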
00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.098 [ 00:08:25.098 { 00:08:25.098 "name": "BaseBdev2", 00:08:25.098 "aliases": [ 00:08:25.098 "25dda263-ed17-474a-ba59-d693dae69e02" 00:08:25.098 ], 00:08:25.098 "product_name": "Malloc disk", 00:08:25.098 "block_size": 512, 00:08:25.098 "num_blocks": 65536, 00:08:25.098 "uuid": "25dda263-ed17-474a-ba59-d693dae69e02", 00:08:25.098 "assigned_rate_limits": { 00:08:25.098 "rw_ios_per_sec": 0, 00:08:25.098 "rw_mbytes_per_sec": 0, 00:08:25.098 "r_mbytes_per_sec": 0, 00:08:25.098 "w_mbytes_per_sec": 0 00:08:25.098 }, 00:08:25.098 "claimed": true, 00:08:25.098 "claim_type": "exclusive_write", 00:08:25.098 "zoned": false, 00:08:25.098 "supported_io_types": { 00:08:25.098 "read": true, 00:08:25.098 "write": true, 00:08:25.098 "unmap": true, 00:08:25.098 "flush": true, 00:08:25.098 "reset": true, 00:08:25.098 "nvme_admin": false, 00:08:25.098 "nvme_io": false, 00:08:25.098 "nvme_io_md": false, 00:08:25.098 "write_zeroes": true, 00:08:25.098 "zcopy": true, 00:08:25.098 "get_zone_info": false, 00:08:25.098 "zone_management": false, 00:08:25.098 "zone_append": false, 00:08:25.098 "compare": false, 00:08:25.098 "compare_and_write": false, 00:08:25.098 "abort": true, 00:08:25.098 "seek_hole": 
false, 00:08:25.098 "seek_data": false, 00:08:25.098 "copy": true, 00:08:25.098 "nvme_iov_md": false 00:08:25.098 }, 00:08:25.098 "memory_domains": [ 00:08:25.098 { 00:08:25.098 "dma_device_id": "system", 00:08:25.098 "dma_device_type": 1 00:08:25.098 }, 00:08:25.098 { 00:08:25.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.098 "dma_device_type": 2 00:08:25.098 } 00:08:25.098 ], 00:08:25.098 "driver_specific": {} 00:08:25.098 } 00:08:25.098 ] 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.098 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.099 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.099 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.099 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.099 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.099 01:09:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.099 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.099 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.099 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.099 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.099 01:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.099 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.099 "name": "Existed_Raid", 00:08:25.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.099 "strip_size_kb": 64, 00:08:25.099 "state": "configuring", 00:08:25.099 "raid_level": "concat", 00:08:25.099 "superblock": false, 00:08:25.099 "num_base_bdevs": 3, 00:08:25.099 "num_base_bdevs_discovered": 2, 00:08:25.099 "num_base_bdevs_operational": 3, 00:08:25.099 "base_bdevs_list": [ 00:08:25.099 { 00:08:25.099 "name": "BaseBdev1", 00:08:25.099 "uuid": "1e6bff38-a8f7-47e0-8494-5610d60d03a1", 00:08:25.099 "is_configured": true, 00:08:25.099 "data_offset": 0, 00:08:25.099 "data_size": 65536 00:08:25.099 }, 00:08:25.099 { 00:08:25.099 "name": "BaseBdev2", 00:08:25.099 "uuid": "25dda263-ed17-474a-ba59-d693dae69e02", 00:08:25.099 "is_configured": true, 00:08:25.099 "data_offset": 0, 00:08:25.099 "data_size": 65536 00:08:25.099 }, 00:08:25.099 { 00:08:25.099 "name": "BaseBdev3", 00:08:25.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.099 "is_configured": false, 00:08:25.099 "data_offset": 0, 00:08:25.099 "data_size": 0 00:08:25.099 } 00:08:25.099 ] 00:08:25.099 }' 00:08:25.099 01:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.099 01:09:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.359 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:25.359 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.359 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.359 [2024-10-15 01:09:38.066074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.359 [2024-10-15 01:09:38.066359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:25.359 [2024-10-15 01:09:38.066478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:25.359 [2024-10-15 01:09:38.067461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:25.359 [2024-10-15 01:09:38.068070] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:25.359 [2024-10-15 01:09:38.068253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:25.359 BaseBdev3 00:08:25.359 [2024-10-15 01:09:38.069037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.359 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.359 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:25.359 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:25.359 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.359 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:25.359 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.359 01:09:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.359 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:25.359 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.359 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.619 [ 00:08:25.619 { 00:08:25.619 "name": "BaseBdev3", 00:08:25.619 "aliases": [ 00:08:25.619 "c769a00e-3a6c-4c67-ab2b-580bff11b8d6" 00:08:25.619 ], 00:08:25.619 "product_name": "Malloc disk", 00:08:25.619 "block_size": 512, 00:08:25.619 "num_blocks": 65536, 00:08:25.619 "uuid": "c769a00e-3a6c-4c67-ab2b-580bff11b8d6", 00:08:25.619 "assigned_rate_limits": { 00:08:25.619 "rw_ios_per_sec": 0, 00:08:25.619 "rw_mbytes_per_sec": 0, 00:08:25.619 "r_mbytes_per_sec": 0, 00:08:25.619 "w_mbytes_per_sec": 0 00:08:25.619 }, 00:08:25.619 "claimed": true, 00:08:25.619 "claim_type": "exclusive_write", 00:08:25.619 "zoned": false, 00:08:25.619 "supported_io_types": { 00:08:25.619 "read": true, 00:08:25.619 "write": true, 00:08:25.619 "unmap": true, 00:08:25.619 "flush": true, 00:08:25.619 "reset": true, 00:08:25.619 "nvme_admin": false, 00:08:25.619 "nvme_io": false, 00:08:25.619 "nvme_io_md": false, 00:08:25.619 "write_zeroes": true, 00:08:25.619 "zcopy": true, 00:08:25.619 "get_zone_info": false, 00:08:25.619 "zone_management": false, 00:08:25.619 "zone_append": false, 00:08:25.619 "compare": false, 
00:08:25.619 "compare_and_write": false, 00:08:25.619 "abort": true, 00:08:25.619 "seek_hole": false, 00:08:25.619 "seek_data": false, 00:08:25.619 "copy": true, 00:08:25.619 "nvme_iov_md": false 00:08:25.619 }, 00:08:25.619 "memory_domains": [ 00:08:25.619 { 00:08:25.619 "dma_device_id": "system", 00:08:25.619 "dma_device_type": 1 00:08:25.619 }, 00:08:25.619 { 00:08:25.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.619 "dma_device_type": 2 00:08:25.619 } 00:08:25.619 ], 00:08:25.619 "driver_specific": {} 00:08:25.619 } 00:08:25.619 ] 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.619 "name": "Existed_Raid", 00:08:25.619 "uuid": "df83e0b2-8b51-4d75-bc48-c1eb22385625", 00:08:25.619 "strip_size_kb": 64, 00:08:25.619 "state": "online", 00:08:25.619 "raid_level": "concat", 00:08:25.619 "superblock": false, 00:08:25.619 "num_base_bdevs": 3, 00:08:25.619 "num_base_bdevs_discovered": 3, 00:08:25.619 "num_base_bdevs_operational": 3, 00:08:25.619 "base_bdevs_list": [ 00:08:25.619 { 00:08:25.619 "name": "BaseBdev1", 00:08:25.619 "uuid": "1e6bff38-a8f7-47e0-8494-5610d60d03a1", 00:08:25.619 "is_configured": true, 00:08:25.619 "data_offset": 0, 00:08:25.619 "data_size": 65536 00:08:25.619 }, 00:08:25.619 { 00:08:25.619 "name": "BaseBdev2", 00:08:25.619 "uuid": "25dda263-ed17-474a-ba59-d693dae69e02", 00:08:25.619 "is_configured": true, 00:08:25.619 "data_offset": 0, 00:08:25.619 "data_size": 65536 00:08:25.619 }, 00:08:25.619 { 00:08:25.619 "name": "BaseBdev3", 00:08:25.619 "uuid": "c769a00e-3a6c-4c67-ab2b-580bff11b8d6", 00:08:25.619 "is_configured": true, 00:08:25.619 "data_offset": 0, 00:08:25.619 "data_size": 65536 00:08:25.619 } 00:08:25.619 ] 00:08:25.619 }' 00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:25.619 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.879 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:25.879 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:25.879 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:25.879 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:25.879 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:25.879 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:25.879 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:25.879 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.879 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.879 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:25.879 [2024-10-15 01:09:38.569497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.879 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.139 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:26.139 "name": "Existed_Raid", 00:08:26.139 "aliases": [ 00:08:26.139 "df83e0b2-8b51-4d75-bc48-c1eb22385625" 00:08:26.139 ], 00:08:26.139 "product_name": "Raid Volume", 00:08:26.139 "block_size": 512, 00:08:26.139 "num_blocks": 196608, 00:08:26.140 "uuid": "df83e0b2-8b51-4d75-bc48-c1eb22385625", 00:08:26.140 "assigned_rate_limits": { 00:08:26.140 "rw_ios_per_sec": 0, 00:08:26.140 "rw_mbytes_per_sec": 0, 00:08:26.140 "r_mbytes_per_sec": 
0, 00:08:26.140 "w_mbytes_per_sec": 0 00:08:26.140 }, 00:08:26.140 "claimed": false, 00:08:26.140 "zoned": false, 00:08:26.140 "supported_io_types": { 00:08:26.140 "read": true, 00:08:26.140 "write": true, 00:08:26.140 "unmap": true, 00:08:26.140 "flush": true, 00:08:26.140 "reset": true, 00:08:26.140 "nvme_admin": false, 00:08:26.140 "nvme_io": false, 00:08:26.140 "nvme_io_md": false, 00:08:26.140 "write_zeroes": true, 00:08:26.140 "zcopy": false, 00:08:26.140 "get_zone_info": false, 00:08:26.140 "zone_management": false, 00:08:26.140 "zone_append": false, 00:08:26.140 "compare": false, 00:08:26.140 "compare_and_write": false, 00:08:26.140 "abort": false, 00:08:26.140 "seek_hole": false, 00:08:26.140 "seek_data": false, 00:08:26.140 "copy": false, 00:08:26.140 "nvme_iov_md": false 00:08:26.140 }, 00:08:26.140 "memory_domains": [ 00:08:26.140 { 00:08:26.140 "dma_device_id": "system", 00:08:26.140 "dma_device_type": 1 00:08:26.140 }, 00:08:26.140 { 00:08:26.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.140 "dma_device_type": 2 00:08:26.140 }, 00:08:26.140 { 00:08:26.140 "dma_device_id": "system", 00:08:26.140 "dma_device_type": 1 00:08:26.140 }, 00:08:26.140 { 00:08:26.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.140 "dma_device_type": 2 00:08:26.140 }, 00:08:26.140 { 00:08:26.140 "dma_device_id": "system", 00:08:26.140 "dma_device_type": 1 00:08:26.140 }, 00:08:26.140 { 00:08:26.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.140 "dma_device_type": 2 00:08:26.140 } 00:08:26.140 ], 00:08:26.140 "driver_specific": { 00:08:26.140 "raid": { 00:08:26.140 "uuid": "df83e0b2-8b51-4d75-bc48-c1eb22385625", 00:08:26.140 "strip_size_kb": 64, 00:08:26.140 "state": "online", 00:08:26.140 "raid_level": "concat", 00:08:26.140 "superblock": false, 00:08:26.140 "num_base_bdevs": 3, 00:08:26.140 "num_base_bdevs_discovered": 3, 00:08:26.140 "num_base_bdevs_operational": 3, 00:08:26.140 "base_bdevs_list": [ 00:08:26.140 { 00:08:26.140 "name": "BaseBdev1", 
00:08:26.140 "uuid": "1e6bff38-a8f7-47e0-8494-5610d60d03a1", 00:08:26.140 "is_configured": true, 00:08:26.140 "data_offset": 0, 00:08:26.140 "data_size": 65536 00:08:26.140 }, 00:08:26.140 { 00:08:26.140 "name": "BaseBdev2", 00:08:26.140 "uuid": "25dda263-ed17-474a-ba59-d693dae69e02", 00:08:26.140 "is_configured": true, 00:08:26.140 "data_offset": 0, 00:08:26.140 "data_size": 65536 00:08:26.140 }, 00:08:26.140 { 00:08:26.140 "name": "BaseBdev3", 00:08:26.140 "uuid": "c769a00e-3a6c-4c67-ab2b-580bff11b8d6", 00:08:26.140 "is_configured": true, 00:08:26.140 "data_offset": 0, 00:08:26.140 "data_size": 65536 00:08:26.140 } 00:08:26.140 ] 00:08:26.140 } 00:08:26.140 } 00:08:26.140 }' 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:26.140 BaseBdev2 00:08:26.140 BaseBdev3' 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.140 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.400 [2024-10-15 01:09:38.864727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:26.400 [2024-10-15 01:09:38.864796] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.400 [2024-10-15 01:09:38.864876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.400 "name": "Existed_Raid", 00:08:26.400 "uuid": "df83e0b2-8b51-4d75-bc48-c1eb22385625", 00:08:26.400 "strip_size_kb": 64, 00:08:26.400 "state": "offline", 00:08:26.400 "raid_level": "concat", 00:08:26.400 "superblock": false, 00:08:26.400 "num_base_bdevs": 3, 00:08:26.400 "num_base_bdevs_discovered": 2, 00:08:26.400 "num_base_bdevs_operational": 2, 00:08:26.400 "base_bdevs_list": [ 00:08:26.400 { 00:08:26.400 "name": null, 00:08:26.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.400 "is_configured": false, 00:08:26.400 "data_offset": 0, 00:08:26.400 "data_size": 65536 00:08:26.400 }, 00:08:26.400 { 00:08:26.400 "name": "BaseBdev2", 00:08:26.400 "uuid": 
"25dda263-ed17-474a-ba59-d693dae69e02", 00:08:26.400 "is_configured": true, 00:08:26.400 "data_offset": 0, 00:08:26.400 "data_size": 65536 00:08:26.400 }, 00:08:26.400 { 00:08:26.400 "name": "BaseBdev3", 00:08:26.400 "uuid": "c769a00e-3a6c-4c67-ab2b-580bff11b8d6", 00:08:26.400 "is_configured": true, 00:08:26.400 "data_offset": 0, 00:08:26.400 "data_size": 65536 00:08:26.400 } 00:08:26.400 ] 00:08:26.400 }' 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.400 01:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.660 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:26.660 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.660 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.660 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:26.660 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.660 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.660 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.660 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:26.660 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:26.660 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:26.660 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.660 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.660 [2024-10-15 01:09:39.371529] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.921 [2024-10-15 01:09:39.442727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:26.921 [2024-10-15 01:09:39.442774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:26.921 01:09:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.921 BaseBdev2 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:26.921 
01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.921 [ 00:08:26.921 { 00:08:26.921 "name": "BaseBdev2", 00:08:26.921 "aliases": [ 00:08:26.921 "e508e15e-8f4f-4569-aefd-cb9bc5c37a35" 00:08:26.921 ], 00:08:26.921 "product_name": "Malloc disk", 00:08:26.921 "block_size": 512, 00:08:26.921 "num_blocks": 65536, 00:08:26.921 "uuid": "e508e15e-8f4f-4569-aefd-cb9bc5c37a35", 00:08:26.921 "assigned_rate_limits": { 00:08:26.921 "rw_ios_per_sec": 0, 00:08:26.921 "rw_mbytes_per_sec": 0, 00:08:26.921 "r_mbytes_per_sec": 0, 00:08:26.921 "w_mbytes_per_sec": 0 00:08:26.921 }, 00:08:26.921 "claimed": false, 00:08:26.921 "zoned": false, 00:08:26.921 "supported_io_types": { 00:08:26.921 "read": true, 00:08:26.921 "write": true, 00:08:26.921 "unmap": true, 00:08:26.921 "flush": true, 00:08:26.921 "reset": true, 00:08:26.921 "nvme_admin": false, 00:08:26.921 "nvme_io": false, 00:08:26.921 "nvme_io_md": false, 00:08:26.921 "write_zeroes": true, 
00:08:26.921 "zcopy": true, 00:08:26.921 "get_zone_info": false, 00:08:26.921 "zone_management": false, 00:08:26.921 "zone_append": false, 00:08:26.921 "compare": false, 00:08:26.921 "compare_and_write": false, 00:08:26.921 "abort": true, 00:08:26.921 "seek_hole": false, 00:08:26.921 "seek_data": false, 00:08:26.921 "copy": true, 00:08:26.921 "nvme_iov_md": false 00:08:26.921 }, 00:08:26.921 "memory_domains": [ 00:08:26.921 { 00:08:26.921 "dma_device_id": "system", 00:08:26.921 "dma_device_type": 1 00:08:26.921 }, 00:08:26.921 { 00:08:26.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.921 "dma_device_type": 2 00:08:26.921 } 00:08:26.921 ], 00:08:26.921 "driver_specific": {} 00:08:26.921 } 00:08:26.921 ] 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.921 BaseBdev3 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:26.921 01:09:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.921 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.921 [ 00:08:26.921 { 00:08:26.921 "name": "BaseBdev3", 00:08:26.921 "aliases": [ 00:08:26.921 "686c05fd-e19d-4fdf-8a34-34a1f1fb99d5" 00:08:26.921 ], 00:08:26.921 "product_name": "Malloc disk", 00:08:26.922 "block_size": 512, 00:08:26.922 "num_blocks": 65536, 00:08:26.922 "uuid": "686c05fd-e19d-4fdf-8a34-34a1f1fb99d5", 00:08:26.922 "assigned_rate_limits": { 00:08:26.922 "rw_ios_per_sec": 0, 00:08:26.922 "rw_mbytes_per_sec": 0, 00:08:26.922 "r_mbytes_per_sec": 0, 00:08:26.922 "w_mbytes_per_sec": 0 00:08:26.922 }, 00:08:26.922 "claimed": false, 00:08:26.922 "zoned": false, 00:08:26.922 "supported_io_types": { 00:08:26.922 "read": true, 00:08:26.922 "write": true, 00:08:26.922 "unmap": true, 00:08:26.922 "flush": true, 00:08:26.922 "reset": true, 00:08:26.922 "nvme_admin": false, 00:08:26.922 "nvme_io": false, 00:08:26.922 "nvme_io_md": false, 00:08:26.922 "write_zeroes": true, 
00:08:26.922 "zcopy": true, 00:08:26.922 "get_zone_info": false, 00:08:26.922 "zone_management": false, 00:08:26.922 "zone_append": false, 00:08:26.922 "compare": false, 00:08:26.922 "compare_and_write": false, 00:08:26.922 "abort": true, 00:08:26.922 "seek_hole": false, 00:08:26.922 "seek_data": false, 00:08:26.922 "copy": true, 00:08:26.922 "nvme_iov_md": false 00:08:26.922 }, 00:08:26.922 "memory_domains": [ 00:08:26.922 { 00:08:26.922 "dma_device_id": "system", 00:08:26.922 "dma_device_type": 1 00:08:26.922 }, 00:08:26.922 { 00:08:26.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.922 "dma_device_type": 2 00:08:26.922 } 00:08:26.922 ], 00:08:26.922 "driver_specific": {} 00:08:26.922 } 00:08:26.922 ] 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.922 [2024-10-15 01:09:39.618152] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:26.922 [2024-10-15 01:09:39.618267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:26.922 [2024-10-15 01:09:39.618315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.922 [2024-10-15 01:09:39.620156] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.922 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.182 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.182 01:09:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.182 "name": "Existed_Raid", 00:08:27.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.182 "strip_size_kb": 64, 00:08:27.182 "state": "configuring", 00:08:27.182 "raid_level": "concat", 00:08:27.182 "superblock": false, 00:08:27.182 "num_base_bdevs": 3, 00:08:27.182 "num_base_bdevs_discovered": 2, 00:08:27.182 "num_base_bdevs_operational": 3, 00:08:27.182 "base_bdevs_list": [ 00:08:27.182 { 00:08:27.182 "name": "BaseBdev1", 00:08:27.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.182 "is_configured": false, 00:08:27.182 "data_offset": 0, 00:08:27.182 "data_size": 0 00:08:27.182 }, 00:08:27.182 { 00:08:27.182 "name": "BaseBdev2", 00:08:27.182 "uuid": "e508e15e-8f4f-4569-aefd-cb9bc5c37a35", 00:08:27.182 "is_configured": true, 00:08:27.182 "data_offset": 0, 00:08:27.182 "data_size": 65536 00:08:27.182 }, 00:08:27.182 { 00:08:27.182 "name": "BaseBdev3", 00:08:27.182 "uuid": "686c05fd-e19d-4fdf-8a34-34a1f1fb99d5", 00:08:27.182 "is_configured": true, 00:08:27.182 "data_offset": 0, 00:08:27.182 "data_size": 65536 00:08:27.182 } 00:08:27.182 ] 00:08:27.182 }' 00:08:27.182 01:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.182 01:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.442 [2024-10-15 01:09:40.053387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.442 "name": "Existed_Raid", 00:08:27.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.442 "strip_size_kb": 64, 00:08:27.442 "state": "configuring", 00:08:27.442 "raid_level": "concat", 00:08:27.442 "superblock": false, 
00:08:27.442 "num_base_bdevs": 3, 00:08:27.442 "num_base_bdevs_discovered": 1, 00:08:27.442 "num_base_bdevs_operational": 3, 00:08:27.442 "base_bdevs_list": [ 00:08:27.442 { 00:08:27.442 "name": "BaseBdev1", 00:08:27.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.442 "is_configured": false, 00:08:27.442 "data_offset": 0, 00:08:27.442 "data_size": 0 00:08:27.442 }, 00:08:27.442 { 00:08:27.442 "name": null, 00:08:27.442 "uuid": "e508e15e-8f4f-4569-aefd-cb9bc5c37a35", 00:08:27.442 "is_configured": false, 00:08:27.442 "data_offset": 0, 00:08:27.442 "data_size": 65536 00:08:27.442 }, 00:08:27.442 { 00:08:27.442 "name": "BaseBdev3", 00:08:27.442 "uuid": "686c05fd-e19d-4fdf-8a34-34a1f1fb99d5", 00:08:27.442 "is_configured": true, 00:08:27.442 "data_offset": 0, 00:08:27.442 "data_size": 65536 00:08:27.442 } 00:08:27.442 ] 00:08:27.442 }' 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.442 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.011 
01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.011 [2024-10-15 01:09:40.531684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.011 BaseBdev1 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.011 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.011 [ 00:08:28.011 { 00:08:28.011 "name": "BaseBdev1", 00:08:28.011 "aliases": [ 00:08:28.011 "4a085a36-8387-4710-8410-6d2326b73db9" 00:08:28.011 ], 00:08:28.011 "product_name": 
"Malloc disk", 00:08:28.011 "block_size": 512, 00:08:28.011 "num_blocks": 65536, 00:08:28.011 "uuid": "4a085a36-8387-4710-8410-6d2326b73db9", 00:08:28.011 "assigned_rate_limits": { 00:08:28.011 "rw_ios_per_sec": 0, 00:08:28.011 "rw_mbytes_per_sec": 0, 00:08:28.011 "r_mbytes_per_sec": 0, 00:08:28.011 "w_mbytes_per_sec": 0 00:08:28.011 }, 00:08:28.011 "claimed": true, 00:08:28.011 "claim_type": "exclusive_write", 00:08:28.011 "zoned": false, 00:08:28.011 "supported_io_types": { 00:08:28.011 "read": true, 00:08:28.011 "write": true, 00:08:28.011 "unmap": true, 00:08:28.011 "flush": true, 00:08:28.011 "reset": true, 00:08:28.011 "nvme_admin": false, 00:08:28.011 "nvme_io": false, 00:08:28.011 "nvme_io_md": false, 00:08:28.011 "write_zeroes": true, 00:08:28.011 "zcopy": true, 00:08:28.011 "get_zone_info": false, 00:08:28.011 "zone_management": false, 00:08:28.011 "zone_append": false, 00:08:28.011 "compare": false, 00:08:28.011 "compare_and_write": false, 00:08:28.011 "abort": true, 00:08:28.011 "seek_hole": false, 00:08:28.011 "seek_data": false, 00:08:28.011 "copy": true, 00:08:28.011 "nvme_iov_md": false 00:08:28.011 }, 00:08:28.011 "memory_domains": [ 00:08:28.011 { 00:08:28.011 "dma_device_id": "system", 00:08:28.011 "dma_device_type": 1 00:08:28.011 }, 00:08:28.011 { 00:08:28.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.011 "dma_device_type": 2 00:08:28.012 } 00:08:28.012 ], 00:08:28.012 "driver_specific": {} 00:08:28.012 } 00:08:28.012 ] 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.012 01:09:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.012 "name": "Existed_Raid", 00:08:28.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.012 "strip_size_kb": 64, 00:08:28.012 "state": "configuring", 00:08:28.012 "raid_level": "concat", 00:08:28.012 "superblock": false, 00:08:28.012 "num_base_bdevs": 3, 00:08:28.012 "num_base_bdevs_discovered": 2, 00:08:28.012 "num_base_bdevs_operational": 3, 00:08:28.012 "base_bdevs_list": [ 00:08:28.012 { 00:08:28.012 "name": "BaseBdev1", 
00:08:28.012 "uuid": "4a085a36-8387-4710-8410-6d2326b73db9", 00:08:28.012 "is_configured": true, 00:08:28.012 "data_offset": 0, 00:08:28.012 "data_size": 65536 00:08:28.012 }, 00:08:28.012 { 00:08:28.012 "name": null, 00:08:28.012 "uuid": "e508e15e-8f4f-4569-aefd-cb9bc5c37a35", 00:08:28.012 "is_configured": false, 00:08:28.012 "data_offset": 0, 00:08:28.012 "data_size": 65536 00:08:28.012 }, 00:08:28.012 { 00:08:28.012 "name": "BaseBdev3", 00:08:28.012 "uuid": "686c05fd-e19d-4fdf-8a34-34a1f1fb99d5", 00:08:28.012 "is_configured": true, 00:08:28.012 "data_offset": 0, 00:08:28.012 "data_size": 65536 00:08:28.012 } 00:08:28.012 ] 00:08:28.012 }' 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.012 01:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.580 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:28.580 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.580 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.580 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.580 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.580 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:28.580 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:28.580 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.580 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.580 [2024-10-15 01:09:41.042949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:28.580 
01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.580 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.581 "name": "Existed_Raid", 00:08:28.581 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:28.581 "strip_size_kb": 64, 00:08:28.581 "state": "configuring", 00:08:28.581 "raid_level": "concat", 00:08:28.581 "superblock": false, 00:08:28.581 "num_base_bdevs": 3, 00:08:28.581 "num_base_bdevs_discovered": 1, 00:08:28.581 "num_base_bdevs_operational": 3, 00:08:28.581 "base_bdevs_list": [ 00:08:28.581 { 00:08:28.581 "name": "BaseBdev1", 00:08:28.581 "uuid": "4a085a36-8387-4710-8410-6d2326b73db9", 00:08:28.581 "is_configured": true, 00:08:28.581 "data_offset": 0, 00:08:28.581 "data_size": 65536 00:08:28.581 }, 00:08:28.581 { 00:08:28.581 "name": null, 00:08:28.581 "uuid": "e508e15e-8f4f-4569-aefd-cb9bc5c37a35", 00:08:28.581 "is_configured": false, 00:08:28.581 "data_offset": 0, 00:08:28.581 "data_size": 65536 00:08:28.581 }, 00:08:28.581 { 00:08:28.581 "name": null, 00:08:28.581 "uuid": "686c05fd-e19d-4fdf-8a34-34a1f1fb99d5", 00:08:28.581 "is_configured": false, 00:08:28.581 "data_offset": 0, 00:08:28.581 "data_size": 65536 00:08:28.581 } 00:08:28.581 ] 00:08:28.581 }' 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.581 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.871 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.871 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:28.871 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.871 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.871 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.871 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:28.871 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:28.871 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.871 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.871 [2024-10-15 01:09:41.498184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:28.871 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.871 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:28.871 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.871 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.872 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.872 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.872 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.872 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.872 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.872 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.872 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.872 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.872 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.872 01:09:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.872 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.872 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.872 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.872 "name": "Existed_Raid", 00:08:28.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.872 "strip_size_kb": 64, 00:08:28.872 "state": "configuring", 00:08:28.872 "raid_level": "concat", 00:08:28.872 "superblock": false, 00:08:28.872 "num_base_bdevs": 3, 00:08:28.872 "num_base_bdevs_discovered": 2, 00:08:28.872 "num_base_bdevs_operational": 3, 00:08:28.872 "base_bdevs_list": [ 00:08:28.872 { 00:08:28.872 "name": "BaseBdev1", 00:08:28.872 "uuid": "4a085a36-8387-4710-8410-6d2326b73db9", 00:08:28.872 "is_configured": true, 00:08:28.872 "data_offset": 0, 00:08:28.872 "data_size": 65536 00:08:28.872 }, 00:08:28.872 { 00:08:28.872 "name": null, 00:08:28.872 "uuid": "e508e15e-8f4f-4569-aefd-cb9bc5c37a35", 00:08:28.872 "is_configured": false, 00:08:28.872 "data_offset": 0, 00:08:28.872 "data_size": 65536 00:08:28.872 }, 00:08:28.872 { 00:08:28.872 "name": "BaseBdev3", 00:08:28.872 "uuid": "686c05fd-e19d-4fdf-8a34-34a1f1fb99d5", 00:08:28.872 "is_configured": true, 00:08:28.872 "data_offset": 0, 00:08:28.872 "data_size": 65536 00:08:28.872 } 00:08:28.872 ] 00:08:28.872 }' 00:08:28.872 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.872 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.441 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.441 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.441 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:29.441 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:29.441 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.441 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:29.442 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:29.442 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.442 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.442 [2024-10-15 01:09:41.981369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:29.442 01:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.442 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:29.442 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.442 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.442 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.442 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.442 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.442 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.442 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.442 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.442 01:09:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.442 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.442 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.442 01:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.442 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.442 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.442 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.442 "name": "Existed_Raid", 00:08:29.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.442 "strip_size_kb": 64, 00:08:29.442 "state": "configuring", 00:08:29.442 "raid_level": "concat", 00:08:29.442 "superblock": false, 00:08:29.442 "num_base_bdevs": 3, 00:08:29.442 "num_base_bdevs_discovered": 1, 00:08:29.442 "num_base_bdevs_operational": 3, 00:08:29.442 "base_bdevs_list": [ 00:08:29.442 { 00:08:29.442 "name": null, 00:08:29.442 "uuid": "4a085a36-8387-4710-8410-6d2326b73db9", 00:08:29.442 "is_configured": false, 00:08:29.442 "data_offset": 0, 00:08:29.442 "data_size": 65536 00:08:29.442 }, 00:08:29.442 { 00:08:29.442 "name": null, 00:08:29.442 "uuid": "e508e15e-8f4f-4569-aefd-cb9bc5c37a35", 00:08:29.442 "is_configured": false, 00:08:29.442 "data_offset": 0, 00:08:29.442 "data_size": 65536 00:08:29.442 }, 00:08:29.442 { 00:08:29.442 "name": "BaseBdev3", 00:08:29.442 "uuid": "686c05fd-e19d-4fdf-8a34-34a1f1fb99d5", 00:08:29.442 "is_configured": true, 00:08:29.442 "data_offset": 0, 00:08:29.442 "data_size": 65536 00:08:29.442 } 00:08:29.442 ] 00:08:29.442 }' 00:08:29.442 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.442 01:09:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.011 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.011 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:30.011 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.011 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.011 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.011 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:30.011 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:30.011 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.012 [2024-10-15 01:09:42.487238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.012 01:09:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.012 "name": "Existed_Raid", 00:08:30.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.012 "strip_size_kb": 64, 00:08:30.012 "state": "configuring", 00:08:30.012 "raid_level": "concat", 00:08:30.012 "superblock": false, 00:08:30.012 "num_base_bdevs": 3, 00:08:30.012 "num_base_bdevs_discovered": 2, 00:08:30.012 "num_base_bdevs_operational": 3, 00:08:30.012 "base_bdevs_list": [ 00:08:30.012 { 00:08:30.012 "name": null, 00:08:30.012 "uuid": "4a085a36-8387-4710-8410-6d2326b73db9", 00:08:30.012 "is_configured": false, 00:08:30.012 "data_offset": 0, 00:08:30.012 "data_size": 65536 00:08:30.012 }, 00:08:30.012 { 00:08:30.012 "name": "BaseBdev2", 00:08:30.012 "uuid": "e508e15e-8f4f-4569-aefd-cb9bc5c37a35", 00:08:30.012 "is_configured": true, 00:08:30.012 "data_offset": 
0, 00:08:30.012 "data_size": 65536 00:08:30.012 }, 00:08:30.012 { 00:08:30.012 "name": "BaseBdev3", 00:08:30.012 "uuid": "686c05fd-e19d-4fdf-8a34-34a1f1fb99d5", 00:08:30.012 "is_configured": true, 00:08:30.012 "data_offset": 0, 00:08:30.012 "data_size": 65536 00:08:30.012 } 00:08:30.012 ] 00:08:30.012 }' 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.012 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4a085a36-8387-4710-8410-6d2326b73db9 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.272 [2024-10-15 01:09:42.933449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:30.272 [2024-10-15 01:09:42.933569] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:30.272 [2024-10-15 01:09:42.933595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:30.272 [2024-10-15 01:09:42.933853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:30.272 [2024-10-15 01:09:42.934006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:30.272 [2024-10-15 01:09:42.934045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:30.272 [2024-10-15 01:09:42.934264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.272 NewBaseBdev 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:30.272 
01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.272 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.273 [ 00:08:30.273 { 00:08:30.273 "name": "NewBaseBdev", 00:08:30.273 "aliases": [ 00:08:30.273 "4a085a36-8387-4710-8410-6d2326b73db9" 00:08:30.273 ], 00:08:30.273 "product_name": "Malloc disk", 00:08:30.273 "block_size": 512, 00:08:30.273 "num_blocks": 65536, 00:08:30.273 "uuid": "4a085a36-8387-4710-8410-6d2326b73db9", 00:08:30.273 "assigned_rate_limits": { 00:08:30.273 "rw_ios_per_sec": 0, 00:08:30.273 "rw_mbytes_per_sec": 0, 00:08:30.273 "r_mbytes_per_sec": 0, 00:08:30.273 "w_mbytes_per_sec": 0 00:08:30.273 }, 00:08:30.273 "claimed": true, 00:08:30.273 "claim_type": "exclusive_write", 00:08:30.273 "zoned": false, 00:08:30.273 "supported_io_types": { 00:08:30.273 "read": true, 00:08:30.273 "write": true, 00:08:30.273 "unmap": true, 00:08:30.273 "flush": true, 00:08:30.273 "reset": true, 00:08:30.273 "nvme_admin": false, 00:08:30.273 "nvme_io": false, 00:08:30.273 "nvme_io_md": false, 00:08:30.273 "write_zeroes": true, 00:08:30.273 "zcopy": true, 00:08:30.273 "get_zone_info": false, 00:08:30.273 "zone_management": false, 00:08:30.273 "zone_append": false, 00:08:30.273 "compare": false, 00:08:30.273 "compare_and_write": false, 00:08:30.273 "abort": true, 00:08:30.273 "seek_hole": false, 00:08:30.273 "seek_data": false, 00:08:30.273 "copy": true, 00:08:30.273 "nvme_iov_md": false 00:08:30.273 }, 00:08:30.273 
"memory_domains": [ 00:08:30.273 { 00:08:30.273 "dma_device_id": "system", 00:08:30.273 "dma_device_type": 1 00:08:30.273 }, 00:08:30.273 { 00:08:30.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.273 "dma_device_type": 2 00:08:30.273 } 00:08:30.273 ], 00:08:30.273 "driver_specific": {} 00:08:30.273 } 00:08:30.273 ] 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.273 01:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.533 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.533 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.533 "name": "Existed_Raid", 00:08:30.533 "uuid": "df22d440-2c9e-432f-9497-c684316b4142", 00:08:30.533 "strip_size_kb": 64, 00:08:30.533 "state": "online", 00:08:30.533 "raid_level": "concat", 00:08:30.533 "superblock": false, 00:08:30.533 "num_base_bdevs": 3, 00:08:30.533 "num_base_bdevs_discovered": 3, 00:08:30.533 "num_base_bdevs_operational": 3, 00:08:30.533 "base_bdevs_list": [ 00:08:30.533 { 00:08:30.533 "name": "NewBaseBdev", 00:08:30.533 "uuid": "4a085a36-8387-4710-8410-6d2326b73db9", 00:08:30.533 "is_configured": true, 00:08:30.533 "data_offset": 0, 00:08:30.533 "data_size": 65536 00:08:30.533 }, 00:08:30.533 { 00:08:30.533 "name": "BaseBdev2", 00:08:30.533 "uuid": "e508e15e-8f4f-4569-aefd-cb9bc5c37a35", 00:08:30.533 "is_configured": true, 00:08:30.533 "data_offset": 0, 00:08:30.533 "data_size": 65536 00:08:30.533 }, 00:08:30.533 { 00:08:30.533 "name": "BaseBdev3", 00:08:30.533 "uuid": "686c05fd-e19d-4fdf-8a34-34a1f1fb99d5", 00:08:30.533 "is_configured": true, 00:08:30.533 "data_offset": 0, 00:08:30.533 "data_size": 65536 00:08:30.533 } 00:08:30.533 ] 00:08:30.533 }' 00:08:30.533 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.533 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.793 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:30.793 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:30.793 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:30.793 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:30.793 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:30.793 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:30.793 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:30.793 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:30.793 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.793 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.793 [2024-10-15 01:09:43.392976] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.793 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.793 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:30.793 "name": "Existed_Raid", 00:08:30.793 "aliases": [ 00:08:30.793 "df22d440-2c9e-432f-9497-c684316b4142" 00:08:30.793 ], 00:08:30.793 "product_name": "Raid Volume", 00:08:30.793 "block_size": 512, 00:08:30.793 "num_blocks": 196608, 00:08:30.793 "uuid": "df22d440-2c9e-432f-9497-c684316b4142", 00:08:30.793 "assigned_rate_limits": { 00:08:30.793 "rw_ios_per_sec": 0, 00:08:30.793 "rw_mbytes_per_sec": 0, 00:08:30.793 "r_mbytes_per_sec": 0, 00:08:30.793 "w_mbytes_per_sec": 0 00:08:30.793 }, 00:08:30.793 "claimed": false, 00:08:30.793 "zoned": false, 00:08:30.793 "supported_io_types": { 00:08:30.793 "read": true, 00:08:30.793 "write": true, 00:08:30.793 "unmap": true, 00:08:30.793 "flush": true, 00:08:30.793 "reset": true, 00:08:30.793 "nvme_admin": false, 00:08:30.793 "nvme_io": false, 00:08:30.793 "nvme_io_md": false, 00:08:30.793 "write_zeroes": true, 
00:08:30.793 "zcopy": false, 00:08:30.793 "get_zone_info": false, 00:08:30.793 "zone_management": false, 00:08:30.793 "zone_append": false, 00:08:30.793 "compare": false, 00:08:30.793 "compare_and_write": false, 00:08:30.793 "abort": false, 00:08:30.793 "seek_hole": false, 00:08:30.793 "seek_data": false, 00:08:30.793 "copy": false, 00:08:30.793 "nvme_iov_md": false 00:08:30.793 }, 00:08:30.793 "memory_domains": [ 00:08:30.793 { 00:08:30.793 "dma_device_id": "system", 00:08:30.793 "dma_device_type": 1 00:08:30.793 }, 00:08:30.793 { 00:08:30.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.793 "dma_device_type": 2 00:08:30.793 }, 00:08:30.793 { 00:08:30.793 "dma_device_id": "system", 00:08:30.793 "dma_device_type": 1 00:08:30.793 }, 00:08:30.793 { 00:08:30.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.793 "dma_device_type": 2 00:08:30.793 }, 00:08:30.793 { 00:08:30.793 "dma_device_id": "system", 00:08:30.793 "dma_device_type": 1 00:08:30.793 }, 00:08:30.793 { 00:08:30.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.793 "dma_device_type": 2 00:08:30.793 } 00:08:30.793 ], 00:08:30.793 "driver_specific": { 00:08:30.793 "raid": { 00:08:30.793 "uuid": "df22d440-2c9e-432f-9497-c684316b4142", 00:08:30.793 "strip_size_kb": 64, 00:08:30.793 "state": "online", 00:08:30.793 "raid_level": "concat", 00:08:30.793 "superblock": false, 00:08:30.793 "num_base_bdevs": 3, 00:08:30.793 "num_base_bdevs_discovered": 3, 00:08:30.793 "num_base_bdevs_operational": 3, 00:08:30.793 "base_bdevs_list": [ 00:08:30.793 { 00:08:30.793 "name": "NewBaseBdev", 00:08:30.793 "uuid": "4a085a36-8387-4710-8410-6d2326b73db9", 00:08:30.793 "is_configured": true, 00:08:30.793 "data_offset": 0, 00:08:30.793 "data_size": 65536 00:08:30.793 }, 00:08:30.793 { 00:08:30.793 "name": "BaseBdev2", 00:08:30.793 "uuid": "e508e15e-8f4f-4569-aefd-cb9bc5c37a35", 00:08:30.793 "is_configured": true, 00:08:30.793 "data_offset": 0, 00:08:30.793 "data_size": 65536 00:08:30.793 }, 00:08:30.793 { 
00:08:30.793 "name": "BaseBdev3", 00:08:30.793 "uuid": "686c05fd-e19d-4fdf-8a34-34a1f1fb99d5", 00:08:30.793 "is_configured": true, 00:08:30.793 "data_offset": 0, 00:08:30.793 "data_size": 65536 00:08:30.793 } 00:08:30.793 ] 00:08:30.794 } 00:08:30.794 } 00:08:30.794 }' 00:08:30.794 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:30.794 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:30.794 BaseBdev2 00:08:30.794 BaseBdev3' 00:08:30.794 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:31.054 [2024-10-15 01:09:43.664250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:31.054 [2024-10-15 01:09:43.664275] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.054 [2024-10-15 01:09:43.664350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.054 [2024-10-15 01:09:43.664401] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.054 [2024-10-15 01:09:43.664413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76498 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76498 ']' 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76498 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76498 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:31.054 killing process with pid 76498 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76498' 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 76498 00:08:31.054 [2024-10-15 01:09:43.704031] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:31.054 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76498 00:08:31.054 [2024-10-15 01:09:43.735370] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.315 01:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:31.315 00:08:31.315 real 0m8.607s 00:08:31.315 user 0m14.771s 00:08:31.315 sys 0m1.689s 00:08:31.315 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.315 ************************************ 00:08:31.315 END TEST raid_state_function_test 00:08:31.315 ************************************ 00:08:31.315 01:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.315 01:09:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:31.315 01:09:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:31.315 01:09:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.315 01:09:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.315 ************************************ 00:08:31.315 START TEST raid_state_function_test_sb 00:08:31.315 ************************************ 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77097 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77097' 00:08:31.315 Process raid pid: 77097 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77097 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77097 ']' 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.315 01:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.575 [2024-10-15 01:09:44.111516] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:08:31.575 [2024-10-15 01:09:44.111725] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.575 [2024-10-15 01:09:44.256569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.575 [2024-10-15 01:09:44.284100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.835 [2024-10-15 01:09:44.326918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.835 [2024-10-15 01:09:44.327050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.404 [2024-10-15 01:09:44.936858] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.404 [2024-10-15 01:09:44.936975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.404 [2024-10-15 
01:09:44.937006] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.404 [2024-10-15 01:09:44.937029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.404 [2024-10-15 01:09:44.937047] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:32.404 [2024-10-15 01:09:44.937069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.404 "name": "Existed_Raid", 00:08:32.404 "uuid": "a950f890-193b-4031-b7af-f34e4307e910", 00:08:32.404 "strip_size_kb": 64, 00:08:32.404 "state": "configuring", 00:08:32.404 "raid_level": "concat", 00:08:32.404 "superblock": true, 00:08:32.404 "num_base_bdevs": 3, 00:08:32.404 "num_base_bdevs_discovered": 0, 00:08:32.404 "num_base_bdevs_operational": 3, 00:08:32.404 "base_bdevs_list": [ 00:08:32.404 { 00:08:32.404 "name": "BaseBdev1", 00:08:32.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.404 "is_configured": false, 00:08:32.404 "data_offset": 0, 00:08:32.404 "data_size": 0 00:08:32.404 }, 00:08:32.404 { 00:08:32.404 "name": "BaseBdev2", 00:08:32.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.404 "is_configured": false, 00:08:32.404 "data_offset": 0, 00:08:32.404 "data_size": 0 00:08:32.404 }, 00:08:32.404 { 00:08:32.404 "name": "BaseBdev3", 00:08:32.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.404 "is_configured": false, 00:08:32.404 "data_offset": 0, 00:08:32.404 "data_size": 0 00:08:32.404 } 00:08:32.404 ] 00:08:32.404 }' 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.404 01:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.664 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.664 01:09:45 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.664 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.664 [2024-10-15 01:09:45.367977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.664 [2024-10-15 01:09:45.368120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:32.664 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.664 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:32.664 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.664 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.664 [2024-10-15 01:09:45.379998] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.664 [2024-10-15 01:09:45.380079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.664 [2024-10-15 01:09:45.380090] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.664 [2024-10-15 01:09:45.380115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.664 [2024-10-15 01:09:45.380122] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:32.664 [2024-10-15 01:09:45.380131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:32.664 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.664 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:32.664 
01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.664 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.924 [2024-10-15 01:09:45.400939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.924 BaseBdev1 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.924 [ 00:08:32.924 { 
00:08:32.924 "name": "BaseBdev1", 00:08:32.924 "aliases": [ 00:08:32.924 "f8342ff4-424a-4966-a966-76847e39f3b8" 00:08:32.924 ], 00:08:32.924 "product_name": "Malloc disk", 00:08:32.924 "block_size": 512, 00:08:32.924 "num_blocks": 65536, 00:08:32.924 "uuid": "f8342ff4-424a-4966-a966-76847e39f3b8", 00:08:32.924 "assigned_rate_limits": { 00:08:32.924 "rw_ios_per_sec": 0, 00:08:32.924 "rw_mbytes_per_sec": 0, 00:08:32.924 "r_mbytes_per_sec": 0, 00:08:32.924 "w_mbytes_per_sec": 0 00:08:32.924 }, 00:08:32.924 "claimed": true, 00:08:32.924 "claim_type": "exclusive_write", 00:08:32.924 "zoned": false, 00:08:32.924 "supported_io_types": { 00:08:32.924 "read": true, 00:08:32.924 "write": true, 00:08:32.924 "unmap": true, 00:08:32.924 "flush": true, 00:08:32.924 "reset": true, 00:08:32.924 "nvme_admin": false, 00:08:32.924 "nvme_io": false, 00:08:32.924 "nvme_io_md": false, 00:08:32.924 "write_zeroes": true, 00:08:32.924 "zcopy": true, 00:08:32.924 "get_zone_info": false, 00:08:32.924 "zone_management": false, 00:08:32.924 "zone_append": false, 00:08:32.924 "compare": false, 00:08:32.924 "compare_and_write": false, 00:08:32.924 "abort": true, 00:08:32.924 "seek_hole": false, 00:08:32.924 "seek_data": false, 00:08:32.924 "copy": true, 00:08:32.924 "nvme_iov_md": false 00:08:32.924 }, 00:08:32.924 "memory_domains": [ 00:08:32.924 { 00:08:32.924 "dma_device_id": "system", 00:08:32.924 "dma_device_type": 1 00:08:32.924 }, 00:08:32.924 { 00:08:32.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.924 "dma_device_type": 2 00:08:32.924 } 00:08:32.924 ], 00:08:32.924 "driver_specific": {} 00:08:32.924 } 00:08:32.924 ] 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.924 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.924 "name": "Existed_Raid", 00:08:32.924 "uuid": "9255bf57-5ef7-4f3e-933a-ef46010d8f13", 00:08:32.924 "strip_size_kb": 64, 00:08:32.924 "state": "configuring", 00:08:32.924 "raid_level": "concat", 00:08:32.924 "superblock": true, 00:08:32.924 
"num_base_bdevs": 3, 00:08:32.924 "num_base_bdevs_discovered": 1, 00:08:32.924 "num_base_bdevs_operational": 3, 00:08:32.924 "base_bdevs_list": [ 00:08:32.924 { 00:08:32.924 "name": "BaseBdev1", 00:08:32.924 "uuid": "f8342ff4-424a-4966-a966-76847e39f3b8", 00:08:32.924 "is_configured": true, 00:08:32.924 "data_offset": 2048, 00:08:32.924 "data_size": 63488 00:08:32.924 }, 00:08:32.924 { 00:08:32.924 "name": "BaseBdev2", 00:08:32.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.924 "is_configured": false, 00:08:32.924 "data_offset": 0, 00:08:32.925 "data_size": 0 00:08:32.925 }, 00:08:32.925 { 00:08:32.925 "name": "BaseBdev3", 00:08:32.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.925 "is_configured": false, 00:08:32.925 "data_offset": 0, 00:08:32.925 "data_size": 0 00:08:32.925 } 00:08:32.925 ] 00:08:32.925 }' 00:08:32.925 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.925 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.184 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.184 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.184 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.184 [2024-10-15 01:09:45.816272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.184 [2024-10-15 01:09:45.816377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:33.184 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.184 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.185 
01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.185 [2024-10-15 01:09:45.828325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.185 [2024-10-15 01:09:45.830210] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.185 [2024-10-15 01:09:45.830284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.185 [2024-10-15 01:09:45.830312] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.185 [2024-10-15 01:09:45.830335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.185 "name": "Existed_Raid", 00:08:33.185 "uuid": "7c32043c-a340-428c-bcda-e5a104b208c2", 00:08:33.185 "strip_size_kb": 64, 00:08:33.185 "state": "configuring", 00:08:33.185 "raid_level": "concat", 00:08:33.185 "superblock": true, 00:08:33.185 "num_base_bdevs": 3, 00:08:33.185 "num_base_bdevs_discovered": 1, 00:08:33.185 "num_base_bdevs_operational": 3, 00:08:33.185 "base_bdevs_list": [ 00:08:33.185 { 00:08:33.185 "name": "BaseBdev1", 00:08:33.185 "uuid": "f8342ff4-424a-4966-a966-76847e39f3b8", 00:08:33.185 "is_configured": true, 00:08:33.185 "data_offset": 2048, 00:08:33.185 "data_size": 63488 00:08:33.185 }, 00:08:33.185 { 00:08:33.185 "name": "BaseBdev2", 00:08:33.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.185 "is_configured": false, 00:08:33.185 "data_offset": 0, 00:08:33.185 "data_size": 0 00:08:33.185 }, 00:08:33.185 { 00:08:33.185 "name": "BaseBdev3", 00:08:33.185 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:33.185 "is_configured": false, 00:08:33.185 "data_offset": 0, 00:08:33.185 "data_size": 0 00:08:33.185 } 00:08:33.185 ] 00:08:33.185 }' 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.185 01:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.754 [2024-10-15 01:09:46.314581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:33.754 BaseBdev2 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.754 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.754 [ 00:08:33.754 { 00:08:33.754 "name": "BaseBdev2", 00:08:33.754 "aliases": [ 00:08:33.754 "7051f698-49b3-49c3-a4f4-97588db60330" 00:08:33.754 ], 00:08:33.754 "product_name": "Malloc disk", 00:08:33.754 "block_size": 512, 00:08:33.754 "num_blocks": 65536, 00:08:33.754 "uuid": "7051f698-49b3-49c3-a4f4-97588db60330", 00:08:33.754 "assigned_rate_limits": { 00:08:33.754 "rw_ios_per_sec": 0, 00:08:33.754 "rw_mbytes_per_sec": 0, 00:08:33.754 "r_mbytes_per_sec": 0, 00:08:33.754 "w_mbytes_per_sec": 0 00:08:33.755 }, 00:08:33.755 "claimed": true, 00:08:33.755 "claim_type": "exclusive_write", 00:08:33.755 "zoned": false, 00:08:33.755 "supported_io_types": { 00:08:33.755 "read": true, 00:08:33.755 "write": true, 00:08:33.755 "unmap": true, 00:08:33.755 "flush": true, 00:08:33.755 "reset": true, 00:08:33.755 "nvme_admin": false, 00:08:33.755 "nvme_io": false, 00:08:33.755 "nvme_io_md": false, 00:08:33.755 "write_zeroes": true, 00:08:33.755 "zcopy": true, 00:08:33.755 "get_zone_info": false, 00:08:33.755 "zone_management": false, 00:08:33.755 "zone_append": false, 00:08:33.755 "compare": false, 00:08:33.755 "compare_and_write": false, 00:08:33.755 "abort": true, 00:08:33.755 "seek_hole": false, 00:08:33.755 "seek_data": false, 00:08:33.755 "copy": true, 00:08:33.755 "nvme_iov_md": false 00:08:33.755 }, 00:08:33.755 "memory_domains": [ 00:08:33.755 { 00:08:33.755 "dma_device_id": "system", 00:08:33.755 "dma_device_type": 1 00:08:33.755 }, 00:08:33.755 { 00:08:33.755 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.755 "dma_device_type": 2 00:08:33.755 } 00:08:33.755 ], 00:08:33.755 "driver_specific": {} 00:08:33.755 } 00:08:33.755 ] 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.755 "name": "Existed_Raid", 00:08:33.755 "uuid": "7c32043c-a340-428c-bcda-e5a104b208c2", 00:08:33.755 "strip_size_kb": 64, 00:08:33.755 "state": "configuring", 00:08:33.755 "raid_level": "concat", 00:08:33.755 "superblock": true, 00:08:33.755 "num_base_bdevs": 3, 00:08:33.755 "num_base_bdevs_discovered": 2, 00:08:33.755 "num_base_bdevs_operational": 3, 00:08:33.755 "base_bdevs_list": [ 00:08:33.755 { 00:08:33.755 "name": "BaseBdev1", 00:08:33.755 "uuid": "f8342ff4-424a-4966-a966-76847e39f3b8", 00:08:33.755 "is_configured": true, 00:08:33.755 "data_offset": 2048, 00:08:33.755 "data_size": 63488 00:08:33.755 }, 00:08:33.755 { 00:08:33.755 "name": "BaseBdev2", 00:08:33.755 "uuid": "7051f698-49b3-49c3-a4f4-97588db60330", 00:08:33.755 "is_configured": true, 00:08:33.755 "data_offset": 2048, 00:08:33.755 "data_size": 63488 00:08:33.755 }, 00:08:33.755 { 00:08:33.755 "name": "BaseBdev3", 00:08:33.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.755 "is_configured": false, 00:08:33.755 "data_offset": 0, 00:08:33.755 "data_size": 0 00:08:33.755 } 00:08:33.755 ] 00:08:33.755 }' 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.755 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:34.325 01:09:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.325 [2024-10-15 01:09:46.770748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:34.325 [2024-10-15 01:09:46.770933] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:34.325 [2024-10-15 01:09:46.770957] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:34.325 [2024-10-15 01:09:46.771259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:34.325 [2024-10-15 01:09:46.771387] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:34.325 [2024-10-15 01:09:46.771402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:34.325 [2024-10-15 01:09:46.771550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.325 BaseBdev3 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.325 [ 00:08:34.325 { 00:08:34.325 "name": "BaseBdev3", 00:08:34.325 "aliases": [ 00:08:34.325 "5ce7d5ba-825d-4213-9431-0757069356ae" 00:08:34.325 ], 00:08:34.325 "product_name": "Malloc disk", 00:08:34.325 "block_size": 512, 00:08:34.325 "num_blocks": 65536, 00:08:34.325 "uuid": "5ce7d5ba-825d-4213-9431-0757069356ae", 00:08:34.325 "assigned_rate_limits": { 00:08:34.325 "rw_ios_per_sec": 0, 00:08:34.325 "rw_mbytes_per_sec": 0, 00:08:34.325 "r_mbytes_per_sec": 0, 00:08:34.325 "w_mbytes_per_sec": 0 00:08:34.325 }, 00:08:34.325 "claimed": true, 00:08:34.325 "claim_type": "exclusive_write", 00:08:34.325 "zoned": false, 00:08:34.325 "supported_io_types": { 00:08:34.325 "read": true, 00:08:34.325 "write": true, 00:08:34.325 "unmap": true, 00:08:34.325 "flush": true, 00:08:34.325 "reset": true, 00:08:34.325 "nvme_admin": false, 00:08:34.325 "nvme_io": false, 00:08:34.325 "nvme_io_md": false, 00:08:34.325 "write_zeroes": true, 00:08:34.325 "zcopy": true, 00:08:34.325 "get_zone_info": false, 00:08:34.325 "zone_management": false, 00:08:34.325 "zone_append": false, 00:08:34.325 "compare": false, 00:08:34.325 "compare_and_write": false, 00:08:34.325 "abort": true, 00:08:34.325 "seek_hole": false, 00:08:34.325 "seek_data": false, 
00:08:34.325 "copy": true, 00:08:34.325 "nvme_iov_md": false 00:08:34.325 }, 00:08:34.325 "memory_domains": [ 00:08:34.325 { 00:08:34.325 "dma_device_id": "system", 00:08:34.325 "dma_device_type": 1 00:08:34.325 }, 00:08:34.325 { 00:08:34.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.325 "dma_device_type": 2 00:08:34.325 } 00:08:34.325 ], 00:08:34.325 "driver_specific": {} 00:08:34.325 } 00:08:34.325 ] 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.325 "name": "Existed_Raid", 00:08:34.325 "uuid": "7c32043c-a340-428c-bcda-e5a104b208c2", 00:08:34.325 "strip_size_kb": 64, 00:08:34.325 "state": "online", 00:08:34.325 "raid_level": "concat", 00:08:34.325 "superblock": true, 00:08:34.325 "num_base_bdevs": 3, 00:08:34.325 "num_base_bdevs_discovered": 3, 00:08:34.325 "num_base_bdevs_operational": 3, 00:08:34.325 "base_bdevs_list": [ 00:08:34.325 { 00:08:34.325 "name": "BaseBdev1", 00:08:34.325 "uuid": "f8342ff4-424a-4966-a966-76847e39f3b8", 00:08:34.325 "is_configured": true, 00:08:34.325 "data_offset": 2048, 00:08:34.325 "data_size": 63488 00:08:34.325 }, 00:08:34.325 { 00:08:34.325 "name": "BaseBdev2", 00:08:34.325 "uuid": "7051f698-49b3-49c3-a4f4-97588db60330", 00:08:34.325 "is_configured": true, 00:08:34.325 "data_offset": 2048, 00:08:34.325 "data_size": 63488 00:08:34.325 }, 00:08:34.325 { 00:08:34.325 "name": "BaseBdev3", 00:08:34.325 "uuid": "5ce7d5ba-825d-4213-9431-0757069356ae", 00:08:34.325 "is_configured": true, 00:08:34.325 "data_offset": 2048, 00:08:34.325 "data_size": 63488 00:08:34.325 } 00:08:34.325 ] 00:08:34.325 }' 00:08:34.325 01:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.326 01:09:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.585 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:34.585 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:34.585 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:34.585 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:34.585 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:34.585 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:34.585 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:34.585 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:34.585 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.585 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.585 [2024-10-15 01:09:47.266301] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.585 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.585 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:34.585 "name": "Existed_Raid", 00:08:34.585 "aliases": [ 00:08:34.585 "7c32043c-a340-428c-bcda-e5a104b208c2" 00:08:34.585 ], 00:08:34.585 "product_name": "Raid Volume", 00:08:34.585 "block_size": 512, 00:08:34.585 "num_blocks": 190464, 00:08:34.585 "uuid": "7c32043c-a340-428c-bcda-e5a104b208c2", 00:08:34.585 "assigned_rate_limits": { 00:08:34.585 "rw_ios_per_sec": 0, 00:08:34.585 "rw_mbytes_per_sec": 0, 00:08:34.585 
"r_mbytes_per_sec": 0, 00:08:34.585 "w_mbytes_per_sec": 0 00:08:34.585 }, 00:08:34.585 "claimed": false, 00:08:34.585 "zoned": false, 00:08:34.585 "supported_io_types": { 00:08:34.586 "read": true, 00:08:34.586 "write": true, 00:08:34.586 "unmap": true, 00:08:34.586 "flush": true, 00:08:34.586 "reset": true, 00:08:34.586 "nvme_admin": false, 00:08:34.586 "nvme_io": false, 00:08:34.586 "nvme_io_md": false, 00:08:34.586 "write_zeroes": true, 00:08:34.586 "zcopy": false, 00:08:34.586 "get_zone_info": false, 00:08:34.586 "zone_management": false, 00:08:34.586 "zone_append": false, 00:08:34.586 "compare": false, 00:08:34.586 "compare_and_write": false, 00:08:34.586 "abort": false, 00:08:34.586 "seek_hole": false, 00:08:34.586 "seek_data": false, 00:08:34.586 "copy": false, 00:08:34.586 "nvme_iov_md": false 00:08:34.586 }, 00:08:34.586 "memory_domains": [ 00:08:34.586 { 00:08:34.586 "dma_device_id": "system", 00:08:34.586 "dma_device_type": 1 00:08:34.586 }, 00:08:34.586 { 00:08:34.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.586 "dma_device_type": 2 00:08:34.586 }, 00:08:34.586 { 00:08:34.586 "dma_device_id": "system", 00:08:34.586 "dma_device_type": 1 00:08:34.586 }, 00:08:34.586 { 00:08:34.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.586 "dma_device_type": 2 00:08:34.586 }, 00:08:34.586 { 00:08:34.586 "dma_device_id": "system", 00:08:34.586 "dma_device_type": 1 00:08:34.586 }, 00:08:34.586 { 00:08:34.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.586 "dma_device_type": 2 00:08:34.586 } 00:08:34.586 ], 00:08:34.586 "driver_specific": { 00:08:34.586 "raid": { 00:08:34.586 "uuid": "7c32043c-a340-428c-bcda-e5a104b208c2", 00:08:34.586 "strip_size_kb": 64, 00:08:34.586 "state": "online", 00:08:34.586 "raid_level": "concat", 00:08:34.586 "superblock": true, 00:08:34.586 "num_base_bdevs": 3, 00:08:34.586 "num_base_bdevs_discovered": 3, 00:08:34.586 "num_base_bdevs_operational": 3, 00:08:34.586 "base_bdevs_list": [ 00:08:34.586 { 00:08:34.586 
"name": "BaseBdev1", 00:08:34.586 "uuid": "f8342ff4-424a-4966-a966-76847e39f3b8", 00:08:34.586 "is_configured": true, 00:08:34.586 "data_offset": 2048, 00:08:34.586 "data_size": 63488 00:08:34.586 }, 00:08:34.586 { 00:08:34.586 "name": "BaseBdev2", 00:08:34.586 "uuid": "7051f698-49b3-49c3-a4f4-97588db60330", 00:08:34.586 "is_configured": true, 00:08:34.586 "data_offset": 2048, 00:08:34.586 "data_size": 63488 00:08:34.586 }, 00:08:34.586 { 00:08:34.586 "name": "BaseBdev3", 00:08:34.586 "uuid": "5ce7d5ba-825d-4213-9431-0757069356ae", 00:08:34.586 "is_configured": true, 00:08:34.586 "data_offset": 2048, 00:08:34.586 "data_size": 63488 00:08:34.586 } 00:08:34.586 ] 00:08:34.586 } 00:08:34.586 } 00:08:34.586 }' 00:08:34.586 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:34.846 BaseBdev2 00:08:34.846 BaseBdev3' 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.846 01:09:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.846 [2024-10-15 01:09:47.529575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:34.846 [2024-10-15 01:09:47.529611] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.846 [2024-10-15 01:09:47.529670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.846 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.105 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.105 "name": "Existed_Raid", 00:08:35.105 "uuid": "7c32043c-a340-428c-bcda-e5a104b208c2", 00:08:35.105 "strip_size_kb": 64, 00:08:35.105 "state": "offline", 00:08:35.105 "raid_level": "concat", 00:08:35.105 "superblock": true, 00:08:35.105 "num_base_bdevs": 3, 00:08:35.105 "num_base_bdevs_discovered": 2, 00:08:35.105 "num_base_bdevs_operational": 2, 00:08:35.105 "base_bdevs_list": [ 00:08:35.105 { 00:08:35.105 "name": null, 00:08:35.105 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:35.105 "is_configured": false, 00:08:35.105 "data_offset": 0, 00:08:35.105 "data_size": 63488 00:08:35.105 }, 00:08:35.105 { 00:08:35.105 "name": "BaseBdev2", 00:08:35.105 "uuid": "7051f698-49b3-49c3-a4f4-97588db60330", 00:08:35.105 "is_configured": true, 00:08:35.105 "data_offset": 2048, 00:08:35.105 "data_size": 63488 00:08:35.105 }, 00:08:35.105 { 00:08:35.105 "name": "BaseBdev3", 00:08:35.105 "uuid": "5ce7d5ba-825d-4213-9431-0757069356ae", 00:08:35.105 "is_configured": true, 00:08:35.105 "data_offset": 2048, 00:08:35.105 "data_size": 63488 00:08:35.105 } 00:08:35.105 ] 00:08:35.105 }' 00:08:35.105 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.105 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.365 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:35.365 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.365 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.365 01:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:35.365 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.365 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.365 01:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.365 [2024-10-15 01:09:48.020253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.365 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.624 [2024-10-15 01:09:48.091727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:35.624 [2024-10-15 01:09:48.091775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.624 BaseBdev2 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.624 
01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.624 [ 00:08:35.624 { 00:08:35.624 "name": "BaseBdev2", 00:08:35.624 "aliases": [ 00:08:35.624 "5d5f5778-ebb6-4823-941f-034ba4f360e8" 00:08:35.624 ], 00:08:35.624 "product_name": "Malloc disk", 00:08:35.624 "block_size": 512, 00:08:35.624 "num_blocks": 65536, 00:08:35.624 "uuid": "5d5f5778-ebb6-4823-941f-034ba4f360e8", 00:08:35.624 "assigned_rate_limits": { 00:08:35.624 "rw_ios_per_sec": 0, 00:08:35.624 "rw_mbytes_per_sec": 0, 00:08:35.624 "r_mbytes_per_sec": 0, 00:08:35.624 "w_mbytes_per_sec": 0 
00:08:35.624 }, 00:08:35.624 "claimed": false, 00:08:35.624 "zoned": false, 00:08:35.624 "supported_io_types": { 00:08:35.624 "read": true, 00:08:35.624 "write": true, 00:08:35.624 "unmap": true, 00:08:35.624 "flush": true, 00:08:35.624 "reset": true, 00:08:35.624 "nvme_admin": false, 00:08:35.624 "nvme_io": false, 00:08:35.624 "nvme_io_md": false, 00:08:35.624 "write_zeroes": true, 00:08:35.624 "zcopy": true, 00:08:35.624 "get_zone_info": false, 00:08:35.624 "zone_management": false, 00:08:35.624 "zone_append": false, 00:08:35.624 "compare": false, 00:08:35.624 "compare_and_write": false, 00:08:35.624 "abort": true, 00:08:35.624 "seek_hole": false, 00:08:35.624 "seek_data": false, 00:08:35.624 "copy": true, 00:08:35.624 "nvme_iov_md": false 00:08:35.624 }, 00:08:35.624 "memory_domains": [ 00:08:35.624 { 00:08:35.624 "dma_device_id": "system", 00:08:35.624 "dma_device_type": 1 00:08:35.624 }, 00:08:35.624 { 00:08:35.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.624 "dma_device_type": 2 00:08:35.624 } 00:08:35.624 ], 00:08:35.624 "driver_specific": {} 00:08:35.624 } 00:08:35.624 ] 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.624 BaseBdev3 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.624 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.624 [ 00:08:35.624 { 00:08:35.624 "name": "BaseBdev3", 00:08:35.624 "aliases": [ 00:08:35.624 "5d00a3a7-d7a1-4dbc-befd-02b8d28efd29" 00:08:35.624 ], 00:08:35.624 "product_name": "Malloc disk", 00:08:35.624 "block_size": 512, 00:08:35.624 "num_blocks": 65536, 00:08:35.624 "uuid": "5d00a3a7-d7a1-4dbc-befd-02b8d28efd29", 00:08:35.624 "assigned_rate_limits": { 00:08:35.624 "rw_ios_per_sec": 0, 00:08:35.624 "rw_mbytes_per_sec": 0, 
00:08:35.624 "r_mbytes_per_sec": 0, 00:08:35.624 "w_mbytes_per_sec": 0 00:08:35.624 }, 00:08:35.624 "claimed": false, 00:08:35.624 "zoned": false, 00:08:35.624 "supported_io_types": { 00:08:35.624 "read": true, 00:08:35.624 "write": true, 00:08:35.624 "unmap": true, 00:08:35.624 "flush": true, 00:08:35.624 "reset": true, 00:08:35.624 "nvme_admin": false, 00:08:35.624 "nvme_io": false, 00:08:35.624 "nvme_io_md": false, 00:08:35.624 "write_zeroes": true, 00:08:35.624 "zcopy": true, 00:08:35.624 "get_zone_info": false, 00:08:35.624 "zone_management": false, 00:08:35.624 "zone_append": false, 00:08:35.624 "compare": false, 00:08:35.624 "compare_and_write": false, 00:08:35.624 "abort": true, 00:08:35.624 "seek_hole": false, 00:08:35.625 "seek_data": false, 00:08:35.625 "copy": true, 00:08:35.625 "nvme_iov_md": false 00:08:35.625 }, 00:08:35.625 "memory_domains": [ 00:08:35.625 { 00:08:35.625 "dma_device_id": "system", 00:08:35.625 "dma_device_type": 1 00:08:35.625 }, 00:08:35.625 { 00:08:35.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.625 "dma_device_type": 2 00:08:35.625 } 00:08:35.625 ], 00:08:35.625 "driver_specific": {} 00:08:35.625 } 00:08:35.625 ] 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:35.625 [2024-10-15 01:09:48.272072] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.625 [2024-10-15 01:09:48.272115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.625 [2024-10-15 01:09:48.272134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.625 [2024-10-15 01:09:48.273954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.625 01:09:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.625 "name": "Existed_Raid", 00:08:35.625 "uuid": "738bd652-6160-4ee0-8634-51595c318b3d", 00:08:35.625 "strip_size_kb": 64, 00:08:35.625 "state": "configuring", 00:08:35.625 "raid_level": "concat", 00:08:35.625 "superblock": true, 00:08:35.625 "num_base_bdevs": 3, 00:08:35.625 "num_base_bdevs_discovered": 2, 00:08:35.625 "num_base_bdevs_operational": 3, 00:08:35.625 "base_bdevs_list": [ 00:08:35.625 { 00:08:35.625 "name": "BaseBdev1", 00:08:35.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.625 "is_configured": false, 00:08:35.625 "data_offset": 0, 00:08:35.625 "data_size": 0 00:08:35.625 }, 00:08:35.625 { 00:08:35.625 "name": "BaseBdev2", 00:08:35.625 "uuid": "5d5f5778-ebb6-4823-941f-034ba4f360e8", 00:08:35.625 "is_configured": true, 00:08:35.625 "data_offset": 2048, 00:08:35.625 "data_size": 63488 00:08:35.625 }, 00:08:35.625 { 00:08:35.625 "name": "BaseBdev3", 00:08:35.625 "uuid": "5d00a3a7-d7a1-4dbc-befd-02b8d28efd29", 00:08:35.625 "is_configured": true, 00:08:35.625 "data_offset": 2048, 00:08:35.625 "data_size": 63488 00:08:35.625 } 00:08:35.625 ] 00:08:35.625 }' 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.625 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.194 [2024-10-15 01:09:48.679418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.194 "name": "Existed_Raid", 00:08:36.194 "uuid": "738bd652-6160-4ee0-8634-51595c318b3d", 00:08:36.194 "strip_size_kb": 64, 00:08:36.194 "state": "configuring", 00:08:36.194 "raid_level": "concat", 00:08:36.194 "superblock": true, 00:08:36.194 "num_base_bdevs": 3, 00:08:36.194 "num_base_bdevs_discovered": 1, 00:08:36.194 "num_base_bdevs_operational": 3, 00:08:36.194 "base_bdevs_list": [ 00:08:36.194 { 00:08:36.194 "name": "BaseBdev1", 00:08:36.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.194 "is_configured": false, 00:08:36.194 "data_offset": 0, 00:08:36.194 "data_size": 0 00:08:36.194 }, 00:08:36.194 { 00:08:36.194 "name": null, 00:08:36.194 "uuid": "5d5f5778-ebb6-4823-941f-034ba4f360e8", 00:08:36.194 "is_configured": false, 00:08:36.194 "data_offset": 0, 00:08:36.194 "data_size": 63488 00:08:36.194 }, 00:08:36.194 { 00:08:36.194 "name": "BaseBdev3", 00:08:36.194 "uuid": "5d00a3a7-d7a1-4dbc-befd-02b8d28efd29", 00:08:36.194 "is_configured": true, 00:08:36.194 "data_offset": 2048, 00:08:36.194 "data_size": 63488 00:08:36.194 } 00:08:36.194 ] 00:08:36.194 }' 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.194 01:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.454 [2024-10-15 01:09:49.085854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.454 BaseBdev1 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:36.454 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.455 01:09:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.455 [ 00:08:36.455 { 00:08:36.455 "name": "BaseBdev1", 00:08:36.455 "aliases": [ 00:08:36.455 "47927af3-0c71-40e9-84ec-32bac0030f9e" 00:08:36.455 ], 00:08:36.455 "product_name": "Malloc disk", 00:08:36.455 "block_size": 512, 00:08:36.455 "num_blocks": 65536, 00:08:36.455 "uuid": "47927af3-0c71-40e9-84ec-32bac0030f9e", 00:08:36.455 "assigned_rate_limits": { 00:08:36.455 "rw_ios_per_sec": 0, 00:08:36.455 "rw_mbytes_per_sec": 0, 00:08:36.455 "r_mbytes_per_sec": 0, 00:08:36.455 "w_mbytes_per_sec": 0 00:08:36.455 }, 00:08:36.455 "claimed": true, 00:08:36.455 "claim_type": "exclusive_write", 00:08:36.455 "zoned": false, 00:08:36.455 "supported_io_types": { 00:08:36.455 "read": true, 00:08:36.455 "write": true, 00:08:36.455 "unmap": true, 00:08:36.455 "flush": true, 00:08:36.455 "reset": true, 00:08:36.455 "nvme_admin": false, 00:08:36.455 "nvme_io": false, 00:08:36.455 "nvme_io_md": false, 00:08:36.455 "write_zeroes": true, 00:08:36.455 "zcopy": true, 00:08:36.455 "get_zone_info": false, 00:08:36.455 "zone_management": false, 00:08:36.455 "zone_append": false, 00:08:36.455 "compare": false, 00:08:36.455 "compare_and_write": false, 00:08:36.455 "abort": true, 00:08:36.455 "seek_hole": false, 00:08:36.455 "seek_data": false, 00:08:36.455 "copy": true, 00:08:36.455 "nvme_iov_md": false 00:08:36.455 }, 00:08:36.455 "memory_domains": [ 00:08:36.455 { 00:08:36.455 "dma_device_id": "system", 00:08:36.455 "dma_device_type": 1 00:08:36.455 }, 00:08:36.455 { 00:08:36.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.455 
"dma_device_type": 2 00:08:36.455 } 00:08:36.455 ], 00:08:36.455 "driver_specific": {} 00:08:36.455 } 00:08:36.455 ] 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.455 "name": "Existed_Raid", 00:08:36.455 "uuid": "738bd652-6160-4ee0-8634-51595c318b3d", 00:08:36.455 "strip_size_kb": 64, 00:08:36.455 "state": "configuring", 00:08:36.455 "raid_level": "concat", 00:08:36.455 "superblock": true, 00:08:36.455 "num_base_bdevs": 3, 00:08:36.455 "num_base_bdevs_discovered": 2, 00:08:36.455 "num_base_bdevs_operational": 3, 00:08:36.455 "base_bdevs_list": [ 00:08:36.455 { 00:08:36.455 "name": "BaseBdev1", 00:08:36.455 "uuid": "47927af3-0c71-40e9-84ec-32bac0030f9e", 00:08:36.455 "is_configured": true, 00:08:36.455 "data_offset": 2048, 00:08:36.455 "data_size": 63488 00:08:36.455 }, 00:08:36.455 { 00:08:36.455 "name": null, 00:08:36.455 "uuid": "5d5f5778-ebb6-4823-941f-034ba4f360e8", 00:08:36.455 "is_configured": false, 00:08:36.455 "data_offset": 0, 00:08:36.455 "data_size": 63488 00:08:36.455 }, 00:08:36.455 { 00:08:36.455 "name": "BaseBdev3", 00:08:36.455 "uuid": "5d00a3a7-d7a1-4dbc-befd-02b8d28efd29", 00:08:36.455 "is_configured": true, 00:08:36.455 "data_offset": 2048, 00:08:36.455 "data_size": 63488 00:08:36.455 } 00:08:36.455 ] 00:08:36.455 }' 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.455 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.024 [2024-10-15 01:09:49.609016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.024 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.024 "name": "Existed_Raid", 00:08:37.024 "uuid": "738bd652-6160-4ee0-8634-51595c318b3d", 00:08:37.024 "strip_size_kb": 64, 00:08:37.024 "state": "configuring", 00:08:37.024 "raid_level": "concat", 00:08:37.024 "superblock": true, 00:08:37.024 "num_base_bdevs": 3, 00:08:37.024 "num_base_bdevs_discovered": 1, 00:08:37.024 "num_base_bdevs_operational": 3, 00:08:37.024 "base_bdevs_list": [ 00:08:37.024 { 00:08:37.024 "name": "BaseBdev1", 00:08:37.024 "uuid": "47927af3-0c71-40e9-84ec-32bac0030f9e", 00:08:37.024 "is_configured": true, 00:08:37.025 "data_offset": 2048, 00:08:37.025 "data_size": 63488 00:08:37.025 }, 00:08:37.025 { 00:08:37.025 "name": null, 00:08:37.025 "uuid": "5d5f5778-ebb6-4823-941f-034ba4f360e8", 00:08:37.025 "is_configured": false, 00:08:37.025 "data_offset": 0, 00:08:37.025 "data_size": 63488 00:08:37.025 }, 00:08:37.025 { 00:08:37.025 "name": null, 00:08:37.025 "uuid": "5d00a3a7-d7a1-4dbc-befd-02b8d28efd29", 00:08:37.025 "is_configured": false, 00:08:37.025 "data_offset": 0, 00:08:37.025 "data_size": 63488 00:08:37.025 } 00:08:37.025 ] 00:08:37.025 }' 00:08:37.025 01:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.025 01:09:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.593 [2024-10-15 01:09:50.132145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.593 01:09:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.593 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.593 "name": "Existed_Raid", 00:08:37.593 "uuid": "738bd652-6160-4ee0-8634-51595c318b3d", 00:08:37.593 "strip_size_kb": 64, 00:08:37.593 "state": "configuring", 00:08:37.593 "raid_level": "concat", 00:08:37.593 "superblock": true, 00:08:37.593 "num_base_bdevs": 3, 00:08:37.593 "num_base_bdevs_discovered": 2, 00:08:37.593 "num_base_bdevs_operational": 3, 00:08:37.593 "base_bdevs_list": [ 00:08:37.593 { 00:08:37.593 "name": "BaseBdev1", 00:08:37.593 "uuid": "47927af3-0c71-40e9-84ec-32bac0030f9e", 00:08:37.593 "is_configured": true, 00:08:37.593 "data_offset": 2048, 00:08:37.593 "data_size": 63488 00:08:37.593 }, 00:08:37.593 { 00:08:37.593 "name": null, 00:08:37.593 "uuid": "5d5f5778-ebb6-4823-941f-034ba4f360e8", 00:08:37.593 "is_configured": 
false, 00:08:37.593 "data_offset": 0, 00:08:37.593 "data_size": 63488 00:08:37.593 }, 00:08:37.593 { 00:08:37.593 "name": "BaseBdev3", 00:08:37.593 "uuid": "5d00a3a7-d7a1-4dbc-befd-02b8d28efd29", 00:08:37.594 "is_configured": true, 00:08:37.594 "data_offset": 2048, 00:08:37.594 "data_size": 63488 00:08:37.594 } 00:08:37.594 ] 00:08:37.594 }' 00:08:37.594 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.594 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.161 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.161 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:38.161 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.161 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.161 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.161 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:38.161 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:38.161 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.161 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.161 [2024-10-15 01:09:50.631433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.162 01:09:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.162 "name": "Existed_Raid", 00:08:38.162 "uuid": "738bd652-6160-4ee0-8634-51595c318b3d", 00:08:38.162 "strip_size_kb": 64, 00:08:38.162 "state": "configuring", 00:08:38.162 "raid_level": "concat", 00:08:38.162 "superblock": true, 00:08:38.162 "num_base_bdevs": 3, 00:08:38.162 
"num_base_bdevs_discovered": 1, 00:08:38.162 "num_base_bdevs_operational": 3, 00:08:38.162 "base_bdevs_list": [ 00:08:38.162 { 00:08:38.162 "name": null, 00:08:38.162 "uuid": "47927af3-0c71-40e9-84ec-32bac0030f9e", 00:08:38.162 "is_configured": false, 00:08:38.162 "data_offset": 0, 00:08:38.162 "data_size": 63488 00:08:38.162 }, 00:08:38.162 { 00:08:38.162 "name": null, 00:08:38.162 "uuid": "5d5f5778-ebb6-4823-941f-034ba4f360e8", 00:08:38.162 "is_configured": false, 00:08:38.162 "data_offset": 0, 00:08:38.162 "data_size": 63488 00:08:38.162 }, 00:08:38.162 { 00:08:38.162 "name": "BaseBdev3", 00:08:38.162 "uuid": "5d00a3a7-d7a1-4dbc-befd-02b8d28efd29", 00:08:38.162 "is_configured": true, 00:08:38.162 "data_offset": 2048, 00:08:38.162 "data_size": 63488 00:08:38.162 } 00:08:38.162 ] 00:08:38.162 }' 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.162 01:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.420 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.420 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.420 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.420 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:38.420 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.420 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:38.420 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:38.420 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.420 01:09:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.420 [2024-10-15 01:09:51.141261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.679 
01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.679 "name": "Existed_Raid", 00:08:38.679 "uuid": "738bd652-6160-4ee0-8634-51595c318b3d", 00:08:38.679 "strip_size_kb": 64, 00:08:38.679 "state": "configuring", 00:08:38.679 "raid_level": "concat", 00:08:38.679 "superblock": true, 00:08:38.679 "num_base_bdevs": 3, 00:08:38.679 "num_base_bdevs_discovered": 2, 00:08:38.679 "num_base_bdevs_operational": 3, 00:08:38.679 "base_bdevs_list": [ 00:08:38.679 { 00:08:38.679 "name": null, 00:08:38.679 "uuid": "47927af3-0c71-40e9-84ec-32bac0030f9e", 00:08:38.679 "is_configured": false, 00:08:38.679 "data_offset": 0, 00:08:38.679 "data_size": 63488 00:08:38.679 }, 00:08:38.679 { 00:08:38.679 "name": "BaseBdev2", 00:08:38.679 "uuid": "5d5f5778-ebb6-4823-941f-034ba4f360e8", 00:08:38.679 "is_configured": true, 00:08:38.679 "data_offset": 2048, 00:08:38.679 "data_size": 63488 00:08:38.679 }, 00:08:38.679 { 00:08:38.679 "name": "BaseBdev3", 00:08:38.679 "uuid": "5d00a3a7-d7a1-4dbc-befd-02b8d28efd29", 00:08:38.679 "is_configured": true, 00:08:38.679 "data_offset": 2048, 00:08:38.679 "data_size": 63488 00:08:38.679 } 00:08:38.679 ] 00:08:38.679 }' 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.679 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.946 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.946 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:38.946 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.946 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
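In the dumps above, `num_base_bdevs_discovered` drops to 1 after `bdev_malloc_delete BaseBdev1` and climbs back to 2 once `bdev_raid_add_base_bdev Existed_Raid BaseBdev2` reclaims a slot. The relationship between that counter and the per-entry `is_configured` flags can be sketched as follows (a hypothetical illustration, not a helper from the test script):

```shell
# Assumed illustration: num_base_bdevs_discovered tracks how many
# entries in base_bdevs_list currently have "is_configured": true.
base_bdevs_list='[
  {"name": "BaseBdev1", "is_configured": true},
  {"name": null,        "is_configured": false},
  {"name": "BaseBdev3", "is_configured": true}
]'
discovered=$(jq '[.[] | select(.is_configured)] | length' <<< "$base_bdevs_list")
echo "num_base_bdevs_discovered=$discovered"
```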
00:08:38.946 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.946 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:38.946 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.946 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.946 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:38.946 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.946 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.205 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 47927af3-0c71-40e9-84ec-32bac0030f9e 00:08:39.205 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.205 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.205 [2024-10-15 01:09:51.695314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:39.205 [2024-10-15 01:09:51.695576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:39.205 [2024-10-15 01:09:51.695597] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:39.205 [2024-10-15 01:09:51.695884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:39.205 [2024-10-15 01:09:51.696010] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:39.205 [2024-10-15 01:09:51.696021] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 
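After recreating the base bdev with `bdev_malloc_create 32 512 -b NewBaseBdev -u 47927af3-...`, the trace calls `waitforbdev NewBaseBdev`, which issues `rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000` to block until the bdev appears. A loose sketch of that polling idea, with a temp file standing in for the bdev purely for illustration (the real helper in autotest_common.sh relies on the RPC's own `-t` timeout):

```shell
# Loose sketch of the waitforbdev polling idea; a temp file stands in
# for the bdev so the snippet runs without an SPDK target.
target=$(mktemp)
waitfor() {
  local deadline=$((SECONDS + 2))
  while ((SECONDS < deadline)); do
    [[ -e $1 ]] && return 0   # "bdev" exists: done waiting
    sleep 0.1
  done
  return 1                    # timed out
}
if waitfor "$target"; then
  result="bdev ready"
else
  result="timeout"
fi
echo "$result"
```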
00:08:39.205 NewBaseBdev 00:08:39.205 [2024-10-15 01:09:51.696132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.205 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.205 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:39.205 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:39.205 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:39.205 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:39.205 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:39.205 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:39.205 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.206 [ 00:08:39.206 { 00:08:39.206 "name": "NewBaseBdev", 00:08:39.206 "aliases": [ 00:08:39.206 "47927af3-0c71-40e9-84ec-32bac0030f9e" 00:08:39.206 ], 00:08:39.206 "product_name": "Malloc disk", 00:08:39.206 "block_size": 512, 
00:08:39.206 "num_blocks": 65536, 00:08:39.206 "uuid": "47927af3-0c71-40e9-84ec-32bac0030f9e", 00:08:39.206 "assigned_rate_limits": { 00:08:39.206 "rw_ios_per_sec": 0, 00:08:39.206 "rw_mbytes_per_sec": 0, 00:08:39.206 "r_mbytes_per_sec": 0, 00:08:39.206 "w_mbytes_per_sec": 0 00:08:39.206 }, 00:08:39.206 "claimed": true, 00:08:39.206 "claim_type": "exclusive_write", 00:08:39.206 "zoned": false, 00:08:39.206 "supported_io_types": { 00:08:39.206 "read": true, 00:08:39.206 "write": true, 00:08:39.206 "unmap": true, 00:08:39.206 "flush": true, 00:08:39.206 "reset": true, 00:08:39.206 "nvme_admin": false, 00:08:39.206 "nvme_io": false, 00:08:39.206 "nvme_io_md": false, 00:08:39.206 "write_zeroes": true, 00:08:39.206 "zcopy": true, 00:08:39.206 "get_zone_info": false, 00:08:39.206 "zone_management": false, 00:08:39.206 "zone_append": false, 00:08:39.206 "compare": false, 00:08:39.206 "compare_and_write": false, 00:08:39.206 "abort": true, 00:08:39.206 "seek_hole": false, 00:08:39.206 "seek_data": false, 00:08:39.206 "copy": true, 00:08:39.206 "nvme_iov_md": false 00:08:39.206 }, 00:08:39.206 "memory_domains": [ 00:08:39.206 { 00:08:39.206 "dma_device_id": "system", 00:08:39.206 "dma_device_type": 1 00:08:39.206 }, 00:08:39.206 { 00:08:39.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.206 "dma_device_type": 2 00:08:39.206 } 00:08:39.206 ], 00:08:39.206 "driver_specific": {} 00:08:39.206 } 00:08:39.206 ] 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.206 "name": "Existed_Raid", 00:08:39.206 "uuid": "738bd652-6160-4ee0-8634-51595c318b3d", 00:08:39.206 "strip_size_kb": 64, 00:08:39.206 "state": "online", 00:08:39.206 "raid_level": "concat", 00:08:39.206 "superblock": true, 00:08:39.206 "num_base_bdevs": 3, 00:08:39.206 "num_base_bdevs_discovered": 3, 00:08:39.206 "num_base_bdevs_operational": 3, 00:08:39.206 "base_bdevs_list": [ 00:08:39.206 { 00:08:39.206 "name": "NewBaseBdev", 00:08:39.206 "uuid": 
"47927af3-0c71-40e9-84ec-32bac0030f9e", 00:08:39.206 "is_configured": true, 00:08:39.206 "data_offset": 2048, 00:08:39.206 "data_size": 63488 00:08:39.206 }, 00:08:39.206 { 00:08:39.206 "name": "BaseBdev2", 00:08:39.206 "uuid": "5d5f5778-ebb6-4823-941f-034ba4f360e8", 00:08:39.206 "is_configured": true, 00:08:39.206 "data_offset": 2048, 00:08:39.206 "data_size": 63488 00:08:39.206 }, 00:08:39.206 { 00:08:39.206 "name": "BaseBdev3", 00:08:39.206 "uuid": "5d00a3a7-d7a1-4dbc-befd-02b8d28efd29", 00:08:39.206 "is_configured": true, 00:08:39.206 "data_offset": 2048, 00:08:39.206 "data_size": 63488 00:08:39.206 } 00:08:39.206 ] 00:08:39.206 }' 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.206 01:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.464 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:39.464 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:39.464 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:39.464 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:39.464 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:39.464 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:39.464 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:39.464 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.464 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.464 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:08:39.464 [2024-10-15 01:09:52.178829] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.723 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.723 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:39.723 "name": "Existed_Raid", 00:08:39.723 "aliases": [ 00:08:39.723 "738bd652-6160-4ee0-8634-51595c318b3d" 00:08:39.723 ], 00:08:39.723 "product_name": "Raid Volume", 00:08:39.723 "block_size": 512, 00:08:39.723 "num_blocks": 190464, 00:08:39.723 "uuid": "738bd652-6160-4ee0-8634-51595c318b3d", 00:08:39.723 "assigned_rate_limits": { 00:08:39.723 "rw_ios_per_sec": 0, 00:08:39.723 "rw_mbytes_per_sec": 0, 00:08:39.723 "r_mbytes_per_sec": 0, 00:08:39.723 "w_mbytes_per_sec": 0 00:08:39.723 }, 00:08:39.723 "claimed": false, 00:08:39.723 "zoned": false, 00:08:39.723 "supported_io_types": { 00:08:39.723 "read": true, 00:08:39.723 "write": true, 00:08:39.723 "unmap": true, 00:08:39.723 "flush": true, 00:08:39.723 "reset": true, 00:08:39.723 "nvme_admin": false, 00:08:39.723 "nvme_io": false, 00:08:39.723 "nvme_io_md": false, 00:08:39.723 "write_zeroes": true, 00:08:39.723 "zcopy": false, 00:08:39.723 "get_zone_info": false, 00:08:39.723 "zone_management": false, 00:08:39.723 "zone_append": false, 00:08:39.723 "compare": false, 00:08:39.723 "compare_and_write": false, 00:08:39.723 "abort": false, 00:08:39.723 "seek_hole": false, 00:08:39.723 "seek_data": false, 00:08:39.723 "copy": false, 00:08:39.723 "nvme_iov_md": false 00:08:39.723 }, 00:08:39.723 "memory_domains": [ 00:08:39.723 { 00:08:39.723 "dma_device_id": "system", 00:08:39.723 "dma_device_type": 1 00:08:39.723 }, 00:08:39.723 { 00:08:39.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.723 "dma_device_type": 2 00:08:39.723 }, 00:08:39.723 { 00:08:39.723 "dma_device_id": "system", 00:08:39.723 "dma_device_type": 1 00:08:39.723 }, 00:08:39.723 { 00:08:39.723 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.723 "dma_device_type": 2 00:08:39.723 }, 00:08:39.723 { 00:08:39.723 "dma_device_id": "system", 00:08:39.723 "dma_device_type": 1 00:08:39.723 }, 00:08:39.723 { 00:08:39.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.723 "dma_device_type": 2 00:08:39.723 } 00:08:39.723 ], 00:08:39.723 "driver_specific": { 00:08:39.723 "raid": { 00:08:39.723 "uuid": "738bd652-6160-4ee0-8634-51595c318b3d", 00:08:39.723 "strip_size_kb": 64, 00:08:39.723 "state": "online", 00:08:39.723 "raid_level": "concat", 00:08:39.723 "superblock": true, 00:08:39.723 "num_base_bdevs": 3, 00:08:39.723 "num_base_bdevs_discovered": 3, 00:08:39.723 "num_base_bdevs_operational": 3, 00:08:39.723 "base_bdevs_list": [ 00:08:39.723 { 00:08:39.723 "name": "NewBaseBdev", 00:08:39.723 "uuid": "47927af3-0c71-40e9-84ec-32bac0030f9e", 00:08:39.723 "is_configured": true, 00:08:39.723 "data_offset": 2048, 00:08:39.723 "data_size": 63488 00:08:39.723 }, 00:08:39.723 { 00:08:39.723 "name": "BaseBdev2", 00:08:39.723 "uuid": "5d5f5778-ebb6-4823-941f-034ba4f360e8", 00:08:39.723 "is_configured": true, 00:08:39.723 "data_offset": 2048, 00:08:39.723 "data_size": 63488 00:08:39.723 }, 00:08:39.723 { 00:08:39.723 "name": "BaseBdev3", 00:08:39.723 "uuid": "5d00a3a7-d7a1-4dbc-befd-02b8d28efd29", 00:08:39.723 "is_configured": true, 00:08:39.723 "data_offset": 2048, 00:08:39.723 "data_size": 63488 00:08:39.723 } 00:08:39.723 ] 00:08:39.723 } 00:08:39.723 } 00:08:39.723 }' 00:08:39.723 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:39.724 BaseBdev2 00:08:39.724 BaseBdev3' 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.724 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.982 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.983 [2024-10-15 01:09:52.454073] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.983 [2024-10-15 01:09:52.454141] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.983 [2024-10-15 01:09:52.454241] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.983 [2024-10-15 01:09:52.454312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.983 [2024-10-15 01:09:52.454360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001c80 name Existed_Raid, state offline 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77097 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77097 ']' 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77097 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77097 00:08:39.983 killing process with pid 77097 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77097' 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77097 00:08:39.983 [2024-10-15 01:09:52.502566] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.983 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77097 00:08:39.983 [2024-10-15 01:09:52.534239] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:40.243 01:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:40.243 00:08:40.243 real 0m8.729s 00:08:40.243 user 0m14.953s 00:08:40.243 sys 0m1.744s 00:08:40.243 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:40.243 ************************************ 00:08:40.243 END TEST raid_state_function_test_sb 00:08:40.243 ************************************ 00:08:40.243 01:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.243 01:09:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:40.243 01:09:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:40.243 01:09:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.243 01:09:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.243 ************************************ 00:08:40.243 START TEST raid_superblock_test 00:08:40.243 ************************************ 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:40.243 01:09:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77701 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77701 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 77701 ']' 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.243 01:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.243 [2024-10-15 01:09:52.905517] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:08:40.243 [2024-10-15 01:09:52.905728] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77701 ] 00:08:40.504 [2024-10-15 01:09:53.050296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.504 [2024-10-15 01:09:53.076674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.504 [2024-10-15 01:09:53.119065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.504 [2024-10-15 01:09:53.119101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:41.074 
01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.074 malloc1 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.074 [2024-10-15 01:09:53.745472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:41.074 [2024-10-15 01:09:53.745577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.074 [2024-10-15 01:09:53.745635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:41.074 [2024-10-15 01:09:53.745669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.074 [2024-10-15 01:09:53.747797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.074 [2024-10-15 01:09:53.747871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:41.074 pt1 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.074 malloc2 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.074 [2024-10-15 01:09:53.778122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:41.074 [2024-10-15 01:09:53.778235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.074 [2024-10-15 01:09:53.778285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:41.074 [2024-10-15 01:09:53.778320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.074 [2024-10-15 01:09:53.780415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.074 [2024-10-15 01:09:53.780499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:41.074 
pt2 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.074 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.334 malloc3 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.334 [2024-10-15 01:09:53.806797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:41.334 [2024-10-15 01:09:53.806890] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.334 [2024-10-15 01:09:53.806926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:41.334 [2024-10-15 01:09:53.806954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.334 [2024-10-15 01:09:53.809083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.334 [2024-10-15 01:09:53.809155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:41.334 pt3 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.334 [2024-10-15 01:09:53.818849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:41.334 [2024-10-15 01:09:53.820751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:41.334 [2024-10-15 01:09:53.820844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:41.334 [2024-10-15 01:09:53.821005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:41.334 [2024-10-15 01:09:53.821048] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:41.334 [2024-10-15 01:09:53.821337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:08:41.334 [2024-10-15 01:09:53.821506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:41.334 [2024-10-15 01:09:53.821551] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:41.334 [2024-10-15 01:09:53.821704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.334 01:09:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.334 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.334 "name": "raid_bdev1", 00:08:41.334 "uuid": "9a15d4f6-7cb5-438c-a555-dab231a6f7ef", 00:08:41.334 "strip_size_kb": 64, 00:08:41.334 "state": "online", 00:08:41.334 "raid_level": "concat", 00:08:41.334 "superblock": true, 00:08:41.334 "num_base_bdevs": 3, 00:08:41.334 "num_base_bdevs_discovered": 3, 00:08:41.334 "num_base_bdevs_operational": 3, 00:08:41.334 "base_bdevs_list": [ 00:08:41.334 { 00:08:41.334 "name": "pt1", 00:08:41.334 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.334 "is_configured": true, 00:08:41.334 "data_offset": 2048, 00:08:41.334 "data_size": 63488 00:08:41.334 }, 00:08:41.334 { 00:08:41.334 "name": "pt2", 00:08:41.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.335 "is_configured": true, 00:08:41.335 "data_offset": 2048, 00:08:41.335 "data_size": 63488 00:08:41.335 }, 00:08:41.335 { 00:08:41.335 "name": "pt3", 00:08:41.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:41.335 "is_configured": true, 00:08:41.335 "data_offset": 2048, 00:08:41.335 "data_size": 63488 00:08:41.335 } 00:08:41.335 ] 00:08:41.335 }' 00:08:41.335 01:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.335 01:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.594 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:41.594 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:41.594 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:41.594 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:41.594 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.594 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.594 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:41.594 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.594 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.594 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.594 [2024-10-15 01:09:54.258339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.594 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.594 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.594 "name": "raid_bdev1", 00:08:41.594 "aliases": [ 00:08:41.594 "9a15d4f6-7cb5-438c-a555-dab231a6f7ef" 00:08:41.594 ], 00:08:41.594 "product_name": "Raid Volume", 00:08:41.594 "block_size": 512, 00:08:41.594 "num_blocks": 190464, 00:08:41.594 "uuid": "9a15d4f6-7cb5-438c-a555-dab231a6f7ef", 00:08:41.594 "assigned_rate_limits": { 00:08:41.594 "rw_ios_per_sec": 0, 00:08:41.594 "rw_mbytes_per_sec": 0, 00:08:41.594 "r_mbytes_per_sec": 0, 00:08:41.594 "w_mbytes_per_sec": 0 00:08:41.594 }, 00:08:41.594 "claimed": false, 00:08:41.594 "zoned": false, 00:08:41.594 "supported_io_types": { 00:08:41.594 "read": true, 00:08:41.594 "write": true, 00:08:41.594 "unmap": true, 00:08:41.594 "flush": true, 00:08:41.594 "reset": true, 00:08:41.594 "nvme_admin": false, 00:08:41.594 "nvme_io": false, 00:08:41.594 "nvme_io_md": false, 00:08:41.594 "write_zeroes": true, 00:08:41.594 "zcopy": false, 00:08:41.594 "get_zone_info": false, 00:08:41.594 "zone_management": false, 00:08:41.594 "zone_append": false, 00:08:41.594 "compare": 
false, 00:08:41.594 "compare_and_write": false, 00:08:41.594 "abort": false, 00:08:41.594 "seek_hole": false, 00:08:41.594 "seek_data": false, 00:08:41.594 "copy": false, 00:08:41.594 "nvme_iov_md": false 00:08:41.594 }, 00:08:41.594 "memory_domains": [ 00:08:41.594 { 00:08:41.594 "dma_device_id": "system", 00:08:41.594 "dma_device_type": 1 00:08:41.594 }, 00:08:41.594 { 00:08:41.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.594 "dma_device_type": 2 00:08:41.594 }, 00:08:41.594 { 00:08:41.594 "dma_device_id": "system", 00:08:41.594 "dma_device_type": 1 00:08:41.594 }, 00:08:41.594 { 00:08:41.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.594 "dma_device_type": 2 00:08:41.594 }, 00:08:41.594 { 00:08:41.594 "dma_device_id": "system", 00:08:41.594 "dma_device_type": 1 00:08:41.594 }, 00:08:41.594 { 00:08:41.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.594 "dma_device_type": 2 00:08:41.594 } 00:08:41.594 ], 00:08:41.594 "driver_specific": { 00:08:41.594 "raid": { 00:08:41.594 "uuid": "9a15d4f6-7cb5-438c-a555-dab231a6f7ef", 00:08:41.594 "strip_size_kb": 64, 00:08:41.594 "state": "online", 00:08:41.594 "raid_level": "concat", 00:08:41.594 "superblock": true, 00:08:41.594 "num_base_bdevs": 3, 00:08:41.594 "num_base_bdevs_discovered": 3, 00:08:41.594 "num_base_bdevs_operational": 3, 00:08:41.594 "base_bdevs_list": [ 00:08:41.594 { 00:08:41.594 "name": "pt1", 00:08:41.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.594 "is_configured": true, 00:08:41.594 "data_offset": 2048, 00:08:41.594 "data_size": 63488 00:08:41.594 }, 00:08:41.594 { 00:08:41.594 "name": "pt2", 00:08:41.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.594 "is_configured": true, 00:08:41.594 "data_offset": 2048, 00:08:41.594 "data_size": 63488 00:08:41.594 }, 00:08:41.594 { 00:08:41.594 "name": "pt3", 00:08:41.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:41.594 "is_configured": true, 00:08:41.594 "data_offset": 2048, 00:08:41.594 
"data_size": 63488 00:08:41.594 } 00:08:41.594 ] 00:08:41.594 } 00:08:41.594 } 00:08:41.594 }' 00:08:41.594 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:41.855 pt2 00:08:41.855 pt3' 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:41.855 01:09:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.855 [2024-10-15 01:09:54.537768] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.855 01:09:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9a15d4f6-7cb5-438c-a555-dab231a6f7ef 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9a15d4f6-7cb5-438c-a555-dab231a6f7ef ']' 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.855 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.115 [2024-10-15 01:09:54.581448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.115 [2024-10-15 01:09:54.581517] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.115 [2024-10-15 01:09:54.581618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.115 [2024-10-15 01:09:54.581705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.115 [2024-10-15 01:09:54.581758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.115 01:09:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # rpc_cmd bdev_get_bdevs 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.115 [2024-10-15 01:09:54.725260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:42.115 [2024-10-15 01:09:54.727209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:08:42.115 [2024-10-15 01:09:54.727254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:42.115 [2024-10-15 01:09:54.727302] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:42.115 [2024-10-15 01:09:54.727344] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:42.115 [2024-10-15 01:09:54.727375] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:42.115 [2024-10-15 01:09:54.727387] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.115 [2024-10-15 01:09:54.727399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:42.115 request: 00:08:42.115 { 00:08:42.115 "name": "raid_bdev1", 00:08:42.115 "raid_level": "concat", 00:08:42.115 "base_bdevs": [ 00:08:42.115 "malloc1", 00:08:42.115 "malloc2", 00:08:42.115 "malloc3" 00:08:42.115 ], 00:08:42.115 "strip_size_kb": 64, 00:08:42.115 "superblock": false, 00:08:42.115 "method": "bdev_raid_create", 00:08:42.115 "req_id": 1 00:08:42.115 } 00:08:42.115 Got JSON-RPC error response 00:08:42.115 response: 00:08:42.115 { 00:08:42.115 "code": -17, 00:08:42.115 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:42.115 } 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.115 [2024-10-15 01:09:54.793089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:42.115 [2024-10-15 01:09:54.793187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.115 [2024-10-15 01:09:54.793221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:42.115 [2024-10-15 01:09:54.793250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.115 [2024-10-15 01:09:54.795376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.115 [2024-10-15 01:09:54.795444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:42.115 [2024-10-15 01:09:54.795559] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:42.115 [2024-10-15 01:09:54.795626] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:42.115 pt1 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.115 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.116 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.116 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.116 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.116 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.116 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.375 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.375 "name": "raid_bdev1", 
00:08:42.375 "uuid": "9a15d4f6-7cb5-438c-a555-dab231a6f7ef", 00:08:42.375 "strip_size_kb": 64, 00:08:42.375 "state": "configuring", 00:08:42.375 "raid_level": "concat", 00:08:42.375 "superblock": true, 00:08:42.375 "num_base_bdevs": 3, 00:08:42.375 "num_base_bdevs_discovered": 1, 00:08:42.375 "num_base_bdevs_operational": 3, 00:08:42.375 "base_bdevs_list": [ 00:08:42.375 { 00:08:42.375 "name": "pt1", 00:08:42.375 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.375 "is_configured": true, 00:08:42.375 "data_offset": 2048, 00:08:42.375 "data_size": 63488 00:08:42.375 }, 00:08:42.375 { 00:08:42.375 "name": null, 00:08:42.375 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.375 "is_configured": false, 00:08:42.375 "data_offset": 2048, 00:08:42.375 "data_size": 63488 00:08:42.375 }, 00:08:42.375 { 00:08:42.375 "name": null, 00:08:42.375 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.375 "is_configured": false, 00:08:42.375 "data_offset": 2048, 00:08:42.375 "data_size": 63488 00:08:42.375 } 00:08:42.375 ] 00:08:42.375 }' 00:08:42.375 01:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.375 01:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.635 [2024-10-15 01:09:55.216412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:42.635 [2024-10-15 01:09:55.216579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.635 [2024-10-15 01:09:55.216609] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:42.635 [2024-10-15 01:09:55.216624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.635 [2024-10-15 01:09:55.217087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.635 [2024-10-15 01:09:55.217114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:42.635 [2024-10-15 01:09:55.217209] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:42.635 [2024-10-15 01:09:55.217237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:42.635 pt2 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.635 [2024-10-15 01:09:55.224379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.635 "name": "raid_bdev1", 00:08:42.635 "uuid": "9a15d4f6-7cb5-438c-a555-dab231a6f7ef", 00:08:42.635 "strip_size_kb": 64, 00:08:42.635 "state": "configuring", 00:08:42.635 "raid_level": "concat", 00:08:42.635 "superblock": true, 00:08:42.635 "num_base_bdevs": 3, 00:08:42.635 "num_base_bdevs_discovered": 1, 00:08:42.635 "num_base_bdevs_operational": 3, 00:08:42.635 "base_bdevs_list": [ 00:08:42.635 { 00:08:42.635 "name": "pt1", 00:08:42.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.635 "is_configured": true, 00:08:42.635 "data_offset": 2048, 00:08:42.635 "data_size": 63488 00:08:42.635 }, 00:08:42.635 { 00:08:42.635 "name": null, 00:08:42.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.635 "is_configured": false, 00:08:42.635 "data_offset": 0, 00:08:42.635 "data_size": 63488 00:08:42.635 }, 00:08:42.635 { 00:08:42.635 "name": null, 00:08:42.635 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.635 "is_configured": false, 00:08:42.635 "data_offset": 2048, 00:08:42.635 "data_size": 63488 00:08:42.635 } 00:08:42.635 ] 00:08:42.635 }' 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.635 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.205 [2024-10-15 01:09:55.651751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.205 [2024-10-15 01:09:55.651888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.205 [2024-10-15 01:09:55.651935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:43.205 [2024-10-15 01:09:55.651970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.205 [2024-10-15 01:09:55.652437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.205 [2024-10-15 01:09:55.652502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.205 [2024-10-15 01:09:55.652615] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:43.205 [2024-10-15 01:09:55.652668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.205 pt2 00:08:43.205 01:09:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.205 [2024-10-15 01:09:55.663707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:43.205 [2024-10-15 01:09:55.663787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.205 [2024-10-15 01:09:55.663821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:43.205 [2024-10-15 01:09:55.663847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.205 [2024-10-15 01:09:55.664232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.205 [2024-10-15 01:09:55.664288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:43.205 [2024-10-15 01:09:55.664380] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:43.205 [2024-10-15 01:09:55.664429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:43.205 [2024-10-15 01:09:55.664565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:43.205 [2024-10-15 01:09:55.664610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:43.205 [2024-10-15 01:09:55.664863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002530 00:08:43.205 [2024-10-15 01:09:55.665000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:43.205 [2024-10-15 01:09:55.665039] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:43.205 [2024-10-15 01:09:55.665173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.205 pt3 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.205 01:09:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.205 "name": "raid_bdev1", 00:08:43.205 "uuid": "9a15d4f6-7cb5-438c-a555-dab231a6f7ef", 00:08:43.205 "strip_size_kb": 64, 00:08:43.205 "state": "online", 00:08:43.205 "raid_level": "concat", 00:08:43.205 "superblock": true, 00:08:43.205 "num_base_bdevs": 3, 00:08:43.205 "num_base_bdevs_discovered": 3, 00:08:43.205 "num_base_bdevs_operational": 3, 00:08:43.205 "base_bdevs_list": [ 00:08:43.205 { 00:08:43.205 "name": "pt1", 00:08:43.205 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.205 "is_configured": true, 00:08:43.205 "data_offset": 2048, 00:08:43.205 "data_size": 63488 00:08:43.205 }, 00:08:43.205 { 00:08:43.205 "name": "pt2", 00:08:43.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.205 "is_configured": true, 00:08:43.205 "data_offset": 2048, 00:08:43.205 "data_size": 63488 00:08:43.205 }, 00:08:43.205 { 00:08:43.205 "name": "pt3", 00:08:43.205 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.205 "is_configured": true, 00:08:43.205 "data_offset": 2048, 00:08:43.205 "data_size": 63488 00:08:43.205 } 00:08:43.205 ] 00:08:43.205 }' 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.205 01:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.466 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:43.466 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:08:43.466 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:43.466 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:43.466 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:43.466 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:43.466 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.466 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.466 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.466 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:43.466 [2024-10-15 01:09:56.111369] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.466 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.466 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:43.466 "name": "raid_bdev1", 00:08:43.466 "aliases": [ 00:08:43.466 "9a15d4f6-7cb5-438c-a555-dab231a6f7ef" 00:08:43.466 ], 00:08:43.466 "product_name": "Raid Volume", 00:08:43.466 "block_size": 512, 00:08:43.466 "num_blocks": 190464, 00:08:43.466 "uuid": "9a15d4f6-7cb5-438c-a555-dab231a6f7ef", 00:08:43.466 "assigned_rate_limits": { 00:08:43.466 "rw_ios_per_sec": 0, 00:08:43.466 "rw_mbytes_per_sec": 0, 00:08:43.466 "r_mbytes_per_sec": 0, 00:08:43.466 "w_mbytes_per_sec": 0 00:08:43.466 }, 00:08:43.466 "claimed": false, 00:08:43.466 "zoned": false, 00:08:43.466 "supported_io_types": { 00:08:43.466 "read": true, 00:08:43.466 "write": true, 00:08:43.466 "unmap": true, 00:08:43.466 "flush": true, 00:08:43.466 "reset": true, 00:08:43.466 "nvme_admin": false, 00:08:43.466 "nvme_io": false, 
00:08:43.466 "nvme_io_md": false, 00:08:43.466 "write_zeroes": true, 00:08:43.466 "zcopy": false, 00:08:43.466 "get_zone_info": false, 00:08:43.466 "zone_management": false, 00:08:43.466 "zone_append": false, 00:08:43.466 "compare": false, 00:08:43.466 "compare_and_write": false, 00:08:43.466 "abort": false, 00:08:43.466 "seek_hole": false, 00:08:43.466 "seek_data": false, 00:08:43.466 "copy": false, 00:08:43.466 "nvme_iov_md": false 00:08:43.466 }, 00:08:43.466 "memory_domains": [ 00:08:43.466 { 00:08:43.466 "dma_device_id": "system", 00:08:43.466 "dma_device_type": 1 00:08:43.466 }, 00:08:43.466 { 00:08:43.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.466 "dma_device_type": 2 00:08:43.466 }, 00:08:43.466 { 00:08:43.466 "dma_device_id": "system", 00:08:43.466 "dma_device_type": 1 00:08:43.466 }, 00:08:43.466 { 00:08:43.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.466 "dma_device_type": 2 00:08:43.466 }, 00:08:43.466 { 00:08:43.466 "dma_device_id": "system", 00:08:43.466 "dma_device_type": 1 00:08:43.466 }, 00:08:43.466 { 00:08:43.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.466 "dma_device_type": 2 00:08:43.466 } 00:08:43.466 ], 00:08:43.466 "driver_specific": { 00:08:43.466 "raid": { 00:08:43.466 "uuid": "9a15d4f6-7cb5-438c-a555-dab231a6f7ef", 00:08:43.466 "strip_size_kb": 64, 00:08:43.466 "state": "online", 00:08:43.466 "raid_level": "concat", 00:08:43.466 "superblock": true, 00:08:43.466 "num_base_bdevs": 3, 00:08:43.466 "num_base_bdevs_discovered": 3, 00:08:43.466 "num_base_bdevs_operational": 3, 00:08:43.466 "base_bdevs_list": [ 00:08:43.466 { 00:08:43.467 "name": "pt1", 00:08:43.467 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.467 "is_configured": true, 00:08:43.467 "data_offset": 2048, 00:08:43.467 "data_size": 63488 00:08:43.467 }, 00:08:43.467 { 00:08:43.467 "name": "pt2", 00:08:43.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.467 "is_configured": true, 00:08:43.467 "data_offset": 2048, 00:08:43.467 
"data_size": 63488 00:08:43.467 }, 00:08:43.467 { 00:08:43.467 "name": "pt3", 00:08:43.467 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.467 "is_configured": true, 00:08:43.467 "data_offset": 2048, 00:08:43.467 "data_size": 63488 00:08:43.467 } 00:08:43.467 ] 00:08:43.467 } 00:08:43.467 } 00:08:43.467 }' 00:08:43.467 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:43.726 pt2 00:08:43.726 pt3' 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:43.726 [2024-10-15 01:09:56.406838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9a15d4f6-7cb5-438c-a555-dab231a6f7ef '!=' 9a15d4f6-7cb5-438c-a555-dab231a6f7ef ']' 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77701 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 77701 ']' 00:08:43.726 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 77701 00:08:43.987 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:43.987 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.987 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77701 00:08:43.987 killing process with pid 77701 00:08:43.987 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.987 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.987 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77701' 00:08:43.987 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 77701 00:08:43.987 [2024-10-15 01:09:56.492272] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:43.987 [2024-10-15 01:09:56.492368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.987 [2024-10-15 01:09:56.492434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.987 [2024-10-15 01:09:56.492444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:43.987 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 77701 00:08:43.987 [2024-10-15 01:09:56.526444] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.247 01:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:44.247 ************************************ 00:08:44.247 END TEST raid_superblock_test 00:08:44.247 ************************************ 00:08:44.247 00:08:44.247 real 0m3.906s 00:08:44.247 user 0m6.220s 00:08:44.247 sys 0m0.834s 00:08:44.247 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.247 01:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.247 01:09:56 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:44.247 01:09:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:44.247 01:09:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.247 01:09:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:44.247 ************************************ 00:08:44.247 START TEST raid_read_error_test 00:08:44.247 ************************************ 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:44.247 01:09:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DPfe5OLxMW 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=77943 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 77943 00:08:44.247 01:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 77943 ']' 00:08:44.248 01:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.248 01:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.248 01:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.248 01:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.248 01:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.248 [2024-10-15 01:09:56.898659] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:08:44.248 [2024-10-15 01:09:56.898857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77943 ] 00:08:44.507 [2024-10-15 01:09:57.045914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.507 [2024-10-15 01:09:57.073570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.507 [2024-10-15 01:09:57.116338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.507 [2024-10-15 01:09:57.116473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.076 BaseBdev1_malloc 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.076 true 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.076 [2024-10-15 01:09:57.754803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:45.076 [2024-10-15 01:09:57.754859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.076 [2024-10-15 01:09:57.754880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:45.076 [2024-10-15 01:09:57.754888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.076 [2024-10-15 01:09:57.757043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.076 [2024-10-15 01:09:57.757083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:45.076 BaseBdev1 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.076 BaseBdev2_malloc 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.076 true 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.076 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.076 [2024-10-15 01:09:57.795249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:45.076 [2024-10-15 01:09:57.795336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.076 [2024-10-15 01:09:57.795374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:45.076 [2024-10-15 01:09:57.795392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.076 [2024-10-15 01:09:57.797511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.076 [2024-10-15 01:09:57.797543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:45.342 BaseBdev2 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.342 BaseBdev3_malloc 00:08:45.342 01:09:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.342 true 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.342 [2024-10-15 01:09:57.835843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:45.342 [2024-10-15 01:09:57.835896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.342 [2024-10-15 01:09:57.835920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:45.342 [2024-10-15 01:09:57.835928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.342 [2024-10-15 01:09:57.838028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.342 [2024-10-15 01:09:57.838101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:45.342 BaseBdev3 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.342 [2024-10-15 01:09:57.847904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.342 [2024-10-15 01:09:57.849763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.342 [2024-10-15 01:09:57.849838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:45.342 [2024-10-15 01:09:57.850020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:45.342 [2024-10-15 01:09:57.850034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:45.342 [2024-10-15 01:09:57.850324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:45.342 [2024-10-15 01:09:57.850462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:45.342 [2024-10-15 01:09:57.850472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:45.342 [2024-10-15 01:09:57.850625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.342 01:09:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.342 "name": "raid_bdev1", 00:08:45.342 "uuid": "ad624de0-b096-4194-acfe-24a71acc9ad9", 00:08:45.342 "strip_size_kb": 64, 00:08:45.342 "state": "online", 00:08:45.342 "raid_level": "concat", 00:08:45.342 "superblock": true, 00:08:45.342 "num_base_bdevs": 3, 00:08:45.342 "num_base_bdevs_discovered": 3, 00:08:45.342 "num_base_bdevs_operational": 3, 00:08:45.342 "base_bdevs_list": [ 00:08:45.342 { 00:08:45.342 "name": "BaseBdev1", 00:08:45.342 "uuid": "3d292889-0566-5b0c-ba6d-0c8961bc6514", 00:08:45.342 "is_configured": true, 00:08:45.342 "data_offset": 2048, 00:08:45.342 "data_size": 63488 00:08:45.342 }, 00:08:45.342 { 00:08:45.342 "name": "BaseBdev2", 00:08:45.342 "uuid": "99bf7164-0304-538d-9f3a-ccae3e1b8a34", 00:08:45.342 "is_configured": true, 00:08:45.342 "data_offset": 2048, 00:08:45.342 "data_size": 63488 
00:08:45.342 }, 00:08:45.342 { 00:08:45.342 "name": "BaseBdev3", 00:08:45.342 "uuid": "74ddee72-1e12-5888-9f45-e570a36543e2", 00:08:45.342 "is_configured": true, 00:08:45.342 "data_offset": 2048, 00:08:45.342 "data_size": 63488 00:08:45.342 } 00:08:45.342 ] 00:08:45.342 }' 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.342 01:09:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.611 01:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:45.611 01:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:45.870 [2024-10-15 01:09:58.367414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.809 "name": "raid_bdev1", 00:08:46.809 "uuid": "ad624de0-b096-4194-acfe-24a71acc9ad9", 00:08:46.809 "strip_size_kb": 64, 00:08:46.809 "state": "online", 00:08:46.809 "raid_level": "concat", 00:08:46.809 "superblock": true, 00:08:46.809 "num_base_bdevs": 3, 00:08:46.809 "num_base_bdevs_discovered": 3, 00:08:46.809 "num_base_bdevs_operational": 3, 00:08:46.809 "base_bdevs_list": [ 00:08:46.809 { 00:08:46.809 "name": "BaseBdev1", 00:08:46.809 "uuid": "3d292889-0566-5b0c-ba6d-0c8961bc6514", 00:08:46.809 "is_configured": true, 00:08:46.809 "data_offset": 2048, 00:08:46.809 "data_size": 63488 
00:08:46.809 }, 00:08:46.809 { 00:08:46.809 "name": "BaseBdev2", 00:08:46.809 "uuid": "99bf7164-0304-538d-9f3a-ccae3e1b8a34", 00:08:46.809 "is_configured": true, 00:08:46.809 "data_offset": 2048, 00:08:46.809 "data_size": 63488 00:08:46.809 }, 00:08:46.809 { 00:08:46.809 "name": "BaseBdev3", 00:08:46.809 "uuid": "74ddee72-1e12-5888-9f45-e570a36543e2", 00:08:46.809 "is_configured": true, 00:08:46.809 "data_offset": 2048, 00:08:46.809 "data_size": 63488 00:08:46.809 } 00:08:46.809 ] 00:08:46.809 }' 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.809 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.069 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:47.069 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.069 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.069 [2024-10-15 01:09:59.718845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.069 [2024-10-15 01:09:59.718878] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.069 [2024-10-15 01:09:59.721409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.069 [2024-10-15 01:09:59.721471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.069 [2024-10-15 01:09:59.721507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.069 [2024-10-15 01:09:59.721518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:47.069 { 00:08:47.069 "results": [ 00:08:47.069 { 00:08:47.069 "job": "raid_bdev1", 00:08:47.069 "core_mask": "0x1", 00:08:47.069 "workload": "randrw", 00:08:47.069 "percentage": 50, 
00:08:47.069 "status": "finished", 00:08:47.069 "queue_depth": 1, 00:08:47.069 "io_size": 131072, 00:08:47.069 "runtime": 1.352086, 00:08:47.069 "iops": 16799.227268087976, 00:08:47.069 "mibps": 2099.903408510997, 00:08:47.069 "io_failed": 1, 00:08:47.069 "io_timeout": 0, 00:08:47.069 "avg_latency_us": 82.48603637055713, 00:08:47.069 "min_latency_us": 24.929257641921396, 00:08:47.069 "max_latency_us": 1409.4532751091704 00:08:47.069 } 00:08:47.069 ], 00:08:47.069 "core_count": 1 00:08:47.069 } 00:08:47.069 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.069 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 77943 00:08:47.069 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 77943 ']' 00:08:47.069 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 77943 00:08:47.069 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:47.069 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.069 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77943 00:08:47.069 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.069 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:47.069 killing process with pid 77943 00:08:47.069 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77943' 00:08:47.069 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 77943 00:08:47.069 [2024-10-15 01:09:59.769639] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.069 01:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 77943 00:08:47.329 [2024-10-15 
01:09:59.796217] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.329 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DPfe5OLxMW 00:08:47.329 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:47.329 01:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:47.329 01:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:47.329 01:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:47.329 01:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.329 01:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:47.329 01:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:47.329 ************************************ 00:08:47.329 END TEST raid_read_error_test 00:08:47.329 ************************************ 00:08:47.329 00:08:47.329 real 0m3.210s 00:08:47.329 user 0m4.077s 00:08:47.329 sys 0m0.523s 00:08:47.329 01:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.329 01:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.589 01:10:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:47.589 01:10:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:47.589 01:10:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.589 01:10:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.589 ************************************ 00:08:47.589 START TEST raid_write_error_test 00:08:47.589 ************************************ 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:08:47.589 01:10:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:47.589 01:10:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.N9FxSR8BEt 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78072 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78072 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78072 ']' 00:08:47.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.589 01:10:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.589 [2024-10-15 01:10:00.179394] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:08:47.589 [2024-10-15 01:10:00.179530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78072 ] 00:08:47.849 [2024-10-15 01:10:00.324876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.849 [2024-10-15 01:10:00.352054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.849 [2024-10-15 01:10:00.394984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.849 [2024-10-15 01:10:00.395018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.418 BaseBdev1_malloc 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.418 true 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.418 [2024-10-15 01:10:01.029676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:48.418 [2024-10-15 01:10:01.029732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.418 [2024-10-15 01:10:01.029776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:48.418 [2024-10-15 01:10:01.029785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.418 [2024-10-15 01:10:01.031904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.418 [2024-10-15 01:10:01.031994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:48.418 BaseBdev1 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.418 BaseBdev2_malloc 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.418 true 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.418 [2024-10-15 01:10:01.066305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:48.418 [2024-10-15 01:10:01.066401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.418 [2024-10-15 01:10:01.066424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:48.418 [2024-10-15 01:10:01.066441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.418 [2024-10-15 01:10:01.068587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.418 [2024-10-15 01:10:01.068622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:48.418 BaseBdev2 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.418 01:10:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.418 BaseBdev3_malloc 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.418 true 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.418 [2024-10-15 01:10:01.107031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:48.418 [2024-10-15 01:10:01.107086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.418 [2024-10-15 01:10:01.107107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:48.418 [2024-10-15 01:10:01.107116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.418 [2024-10-15 01:10:01.109225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.418 [2024-10-15 01:10:01.109297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:48.418 BaseBdev3 00:08:48.418 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.419 [2024-10-15 01:10:01.119088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.419 [2024-10-15 01:10:01.120975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.419 [2024-10-15 01:10:01.121092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.419 [2024-10-15 01:10:01.121281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:48.419 [2024-10-15 01:10:01.121297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.419 [2024-10-15 01:10:01.121567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:48.419 [2024-10-15 01:10:01.121703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:48.419 [2024-10-15 01:10:01.121713] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:48.419 [2024-10-15 01:10:01.121836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.419 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.678 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.678 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.678 "name": "raid_bdev1", 00:08:48.678 "uuid": "5744e5e9-88f0-45f9-9180-50f25d74e2c6", 00:08:48.678 "strip_size_kb": 64, 00:08:48.678 "state": "online", 00:08:48.678 "raid_level": "concat", 00:08:48.678 "superblock": true, 00:08:48.678 "num_base_bdevs": 3, 00:08:48.678 "num_base_bdevs_discovered": 3, 00:08:48.678 "num_base_bdevs_operational": 3, 00:08:48.678 "base_bdevs_list": [ 00:08:48.678 { 00:08:48.678 
"name": "BaseBdev1", 00:08:48.678 "uuid": "b2d9aecb-3e7b-5580-b9b1-f5ff0dba6319", 00:08:48.678 "is_configured": true, 00:08:48.678 "data_offset": 2048, 00:08:48.678 "data_size": 63488 00:08:48.678 }, 00:08:48.678 { 00:08:48.678 "name": "BaseBdev2", 00:08:48.678 "uuid": "12555a17-81a0-550c-9715-1633165c909a", 00:08:48.678 "is_configured": true, 00:08:48.678 "data_offset": 2048, 00:08:48.678 "data_size": 63488 00:08:48.678 }, 00:08:48.678 { 00:08:48.678 "name": "BaseBdev3", 00:08:48.678 "uuid": "f75fe1eb-b1fd-5b62-9ea9-af10c330effe", 00:08:48.678 "is_configured": true, 00:08:48.678 "data_offset": 2048, 00:08:48.678 "data_size": 63488 00:08:48.678 } 00:08:48.678 ] 00:08:48.678 }' 00:08:48.678 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.678 01:10:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.937 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:48.937 01:10:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:48.937 [2024-10-15 01:10:01.638596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.875 01:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.135 01:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.135 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.135 "name": "raid_bdev1", 00:08:50.135 "uuid": "5744e5e9-88f0-45f9-9180-50f25d74e2c6", 00:08:50.135 "strip_size_kb": 64, 00:08:50.135 "state": "online", 
00:08:50.135 "raid_level": "concat", 00:08:50.135 "superblock": true, 00:08:50.135 "num_base_bdevs": 3, 00:08:50.135 "num_base_bdevs_discovered": 3, 00:08:50.135 "num_base_bdevs_operational": 3, 00:08:50.135 "base_bdevs_list": [ 00:08:50.135 { 00:08:50.135 "name": "BaseBdev1", 00:08:50.135 "uuid": "b2d9aecb-3e7b-5580-b9b1-f5ff0dba6319", 00:08:50.135 "is_configured": true, 00:08:50.135 "data_offset": 2048, 00:08:50.135 "data_size": 63488 00:08:50.135 }, 00:08:50.135 { 00:08:50.136 "name": "BaseBdev2", 00:08:50.136 "uuid": "12555a17-81a0-550c-9715-1633165c909a", 00:08:50.136 "is_configured": true, 00:08:50.136 "data_offset": 2048, 00:08:50.136 "data_size": 63488 00:08:50.136 }, 00:08:50.136 { 00:08:50.136 "name": "BaseBdev3", 00:08:50.136 "uuid": "f75fe1eb-b1fd-5b62-9ea9-af10c330effe", 00:08:50.136 "is_configured": true, 00:08:50.136 "data_offset": 2048, 00:08:50.136 "data_size": 63488 00:08:50.136 } 00:08:50.136 ] 00:08:50.136 }' 00:08:50.136 01:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.136 01:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.396 01:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:50.396 01:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.396 01:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.396 [2024-10-15 01:10:03.009154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.396 [2024-10-15 01:10:03.009188] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.396 [2024-10-15 01:10:03.011758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.396 [2024-10-15 01:10:03.011802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.396 [2024-10-15 01:10:03.011837] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.396 [2024-10-15 01:10:03.011848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:50.396 { 00:08:50.396 "results": [ 00:08:50.396 { 00:08:50.396 "job": "raid_bdev1", 00:08:50.396 "core_mask": "0x1", 00:08:50.396 "workload": "randrw", 00:08:50.396 "percentage": 50, 00:08:50.396 "status": "finished", 00:08:50.396 "queue_depth": 1, 00:08:50.396 "io_size": 131072, 00:08:50.396 "runtime": 1.371182, 00:08:50.396 "iops": 16867.92854631989, 00:08:50.396 "mibps": 2108.4910682899863, 00:08:50.396 "io_failed": 1, 00:08:50.396 "io_timeout": 0, 00:08:50.396 "avg_latency_us": 82.09764667901382, 00:08:50.396 "min_latency_us": 24.593886462882097, 00:08:50.396 "max_latency_us": 1760.0279475982534 00:08:50.396 } 00:08:50.396 ], 00:08:50.396 "core_count": 1 00:08:50.396 } 00:08:50.396 01:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.396 01:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78072 00:08:50.396 01:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78072 ']' 00:08:50.396 01:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78072 00:08:50.396 01:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:50.396 01:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.396 01:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78072 00:08:50.396 01:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.396 01:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.396 01:10:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 78072' 00:08:50.396 killing process with pid 78072 00:08:50.396 01:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78072 00:08:50.396 [2024-10-15 01:10:03.047248] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.396 01:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78072 00:08:50.396 [2024-10-15 01:10:03.073502] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.656 01:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.N9FxSR8BEt 00:08:50.657 01:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:50.657 01:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:50.657 01:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:50.657 01:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:50.657 01:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:50.657 01:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:50.657 01:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:50.657 ************************************ 00:08:50.657 END TEST raid_write_error_test 00:08:50.657 ************************************ 00:08:50.657 00:08:50.657 real 0m3.206s 00:08:50.657 user 0m4.072s 00:08:50.657 sys 0m0.492s 00:08:50.657 01:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.657 01:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.657 01:10:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:50.657 01:10:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:08:50.657 01:10:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:50.657 01:10:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.657 01:10:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.657 ************************************ 00:08:50.657 START TEST raid_state_function_test 00:08:50.657 ************************************ 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78199 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78199' 00:08:50.657 Process raid pid: 78199 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78199 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78199 ']' 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.657 01:10:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.916 [2024-10-15 01:10:03.448669] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:08:50.916 [2024-10-15 01:10:03.448801] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.916 [2024-10-15 01:10:03.592967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.916 [2024-10-15 01:10:03.621380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.176 [2024-10-15 01:10:03.664540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.176 [2024-10-15 01:10:03.664575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.745 [2024-10-15 01:10:04.282473] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:51.745 [2024-10-15 01:10:04.282533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:51.745 [2024-10-15 01:10:04.282552] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:51.745 [2024-10-15 01:10:04.282563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:51.745 [2024-10-15 01:10:04.282569] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:51.745 [2024-10-15 01:10:04.282580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.745 
01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.745 "name": "Existed_Raid", 00:08:51.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.745 "strip_size_kb": 0, 00:08:51.745 "state": "configuring", 00:08:51.745 "raid_level": "raid1", 00:08:51.745 "superblock": false, 00:08:51.745 "num_base_bdevs": 3, 00:08:51.745 "num_base_bdevs_discovered": 0, 00:08:51.745 "num_base_bdevs_operational": 3, 00:08:51.745 "base_bdevs_list": [ 00:08:51.745 { 00:08:51.745 "name": "BaseBdev1", 00:08:51.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.745 "is_configured": false, 00:08:51.745 "data_offset": 0, 00:08:51.745 "data_size": 0 00:08:51.745 }, 00:08:51.745 { 00:08:51.745 "name": "BaseBdev2", 00:08:51.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.745 "is_configured": false, 00:08:51.745 "data_offset": 0, 00:08:51.745 "data_size": 0 00:08:51.745 }, 00:08:51.745 { 00:08:51.745 "name": "BaseBdev3", 00:08:51.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.745 "is_configured": false, 00:08:51.745 "data_offset": 0, 00:08:51.745 "data_size": 0 00:08:51.745 } 00:08:51.745 ] 00:08:51.745 }' 00:08:51.745 01:10:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.745 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.005 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:52.005 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.005 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.005 [2024-10-15 01:10:04.713692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:52.005 [2024-10-15 01:10:04.713774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:52.005 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.005 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:52.005 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.005 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.005 [2024-10-15 01:10:04.721635] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:52.005 [2024-10-15 01:10:04.721709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:52.005 [2024-10-15 01:10:04.721736] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:52.005 [2024-10-15 01:10:04.721758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:52.005 [2024-10-15 01:10:04.721775] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:52.005 [2024-10-15 01:10:04.721795] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:52.005 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.005 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:52.005 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.005 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.266 [2024-10-15 01:10:04.738726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.266 BaseBdev1 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.266 [ 00:08:52.266 { 00:08:52.266 "name": "BaseBdev1", 00:08:52.266 "aliases": [ 00:08:52.266 "f79c785a-011d-48dd-b2cb-b0b43d07cc3b" 00:08:52.266 ], 00:08:52.266 "product_name": "Malloc disk", 00:08:52.266 "block_size": 512, 00:08:52.266 "num_blocks": 65536, 00:08:52.266 "uuid": "f79c785a-011d-48dd-b2cb-b0b43d07cc3b", 00:08:52.266 "assigned_rate_limits": { 00:08:52.266 "rw_ios_per_sec": 0, 00:08:52.266 "rw_mbytes_per_sec": 0, 00:08:52.266 "r_mbytes_per_sec": 0, 00:08:52.266 "w_mbytes_per_sec": 0 00:08:52.266 }, 00:08:52.266 "claimed": true, 00:08:52.266 "claim_type": "exclusive_write", 00:08:52.266 "zoned": false, 00:08:52.266 "supported_io_types": { 00:08:52.266 "read": true, 00:08:52.266 "write": true, 00:08:52.266 "unmap": true, 00:08:52.266 "flush": true, 00:08:52.266 "reset": true, 00:08:52.266 "nvme_admin": false, 00:08:52.266 "nvme_io": false, 00:08:52.266 "nvme_io_md": false, 00:08:52.266 "write_zeroes": true, 00:08:52.266 "zcopy": true, 00:08:52.266 "get_zone_info": false, 00:08:52.266 "zone_management": false, 00:08:52.266 "zone_append": false, 00:08:52.266 "compare": false, 00:08:52.266 "compare_and_write": false, 00:08:52.266 "abort": true, 00:08:52.266 "seek_hole": false, 00:08:52.266 "seek_data": false, 00:08:52.266 "copy": true, 00:08:52.266 "nvme_iov_md": false 00:08:52.266 }, 00:08:52.266 "memory_domains": [ 00:08:52.266 { 00:08:52.266 "dma_device_id": "system", 00:08:52.266 "dma_device_type": 1 00:08:52.266 }, 00:08:52.266 { 00:08:52.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.266 "dma_device_type": 2 00:08:52.266 } 00:08:52.266 ], 00:08:52.266 "driver_specific": {} 00:08:52.266 } 00:08:52.266 ] 00:08:52.266 01:10:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:52.266 "name": "Existed_Raid", 00:08:52.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.266 "strip_size_kb": 0, 00:08:52.266 "state": "configuring", 00:08:52.266 "raid_level": "raid1", 00:08:52.266 "superblock": false, 00:08:52.266 "num_base_bdevs": 3, 00:08:52.266 "num_base_bdevs_discovered": 1, 00:08:52.266 "num_base_bdevs_operational": 3, 00:08:52.266 "base_bdevs_list": [ 00:08:52.266 { 00:08:52.266 "name": "BaseBdev1", 00:08:52.266 "uuid": "f79c785a-011d-48dd-b2cb-b0b43d07cc3b", 00:08:52.266 "is_configured": true, 00:08:52.266 "data_offset": 0, 00:08:52.266 "data_size": 65536 00:08:52.266 }, 00:08:52.266 { 00:08:52.266 "name": "BaseBdev2", 00:08:52.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.266 "is_configured": false, 00:08:52.266 "data_offset": 0, 00:08:52.266 "data_size": 0 00:08:52.266 }, 00:08:52.266 { 00:08:52.266 "name": "BaseBdev3", 00:08:52.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.266 "is_configured": false, 00:08:52.266 "data_offset": 0, 00:08:52.266 "data_size": 0 00:08:52.266 } 00:08:52.266 ] 00:08:52.266 }' 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.266 01:10:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.526 [2024-10-15 01:10:05.205996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:52.526 [2024-10-15 01:10:05.206109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.526 [2024-10-15 01:10:05.218008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.526 [2024-10-15 01:10:05.219874] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:52.526 [2024-10-15 01:10:05.219963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:52.526 [2024-10-15 01:10:05.219991] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:52.526 [2024-10-15 01:10:05.220014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.526 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.786 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.786 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.786 "name": "Existed_Raid", 00:08:52.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.786 "strip_size_kb": 0, 00:08:52.786 "state": "configuring", 00:08:52.786 "raid_level": "raid1", 00:08:52.786 "superblock": false, 00:08:52.786 "num_base_bdevs": 3, 00:08:52.786 "num_base_bdevs_discovered": 1, 00:08:52.786 "num_base_bdevs_operational": 3, 00:08:52.786 "base_bdevs_list": [ 00:08:52.786 { 00:08:52.786 "name": "BaseBdev1", 00:08:52.786 "uuid": "f79c785a-011d-48dd-b2cb-b0b43d07cc3b", 00:08:52.786 "is_configured": true, 00:08:52.786 "data_offset": 0, 00:08:52.786 "data_size": 65536 00:08:52.786 }, 00:08:52.786 { 00:08:52.786 "name": "BaseBdev2", 00:08:52.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.786 
"is_configured": false, 00:08:52.786 "data_offset": 0, 00:08:52.786 "data_size": 0 00:08:52.786 }, 00:08:52.786 { 00:08:52.786 "name": "BaseBdev3", 00:08:52.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.786 "is_configured": false, 00:08:52.786 "data_offset": 0, 00:08:52.786 "data_size": 0 00:08:52.786 } 00:08:52.786 ] 00:08:52.786 }' 00:08:52.786 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.786 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.045 [2024-10-15 01:10:05.704212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.045 BaseBdev2 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:53.045 01:10:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.045 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.045 [ 00:08:53.045 { 00:08:53.045 "name": "BaseBdev2", 00:08:53.045 "aliases": [ 00:08:53.045 "7fc042ee-caca-43a2-b4b1-c198c7447165" 00:08:53.045 ], 00:08:53.045 "product_name": "Malloc disk", 00:08:53.045 "block_size": 512, 00:08:53.045 "num_blocks": 65536, 00:08:53.045 "uuid": "7fc042ee-caca-43a2-b4b1-c198c7447165", 00:08:53.045 "assigned_rate_limits": { 00:08:53.045 "rw_ios_per_sec": 0, 00:08:53.045 "rw_mbytes_per_sec": 0, 00:08:53.045 "r_mbytes_per_sec": 0, 00:08:53.045 "w_mbytes_per_sec": 0 00:08:53.045 }, 00:08:53.045 "claimed": true, 00:08:53.045 "claim_type": "exclusive_write", 00:08:53.045 "zoned": false, 00:08:53.045 "supported_io_types": { 00:08:53.045 "read": true, 00:08:53.045 "write": true, 00:08:53.045 "unmap": true, 00:08:53.045 "flush": true, 00:08:53.045 "reset": true, 00:08:53.045 "nvme_admin": false, 00:08:53.046 "nvme_io": false, 00:08:53.046 "nvme_io_md": false, 00:08:53.046 "write_zeroes": true, 00:08:53.046 "zcopy": true, 00:08:53.046 "get_zone_info": false, 00:08:53.046 "zone_management": false, 00:08:53.046 "zone_append": false, 00:08:53.046 "compare": false, 00:08:53.046 "compare_and_write": false, 00:08:53.046 "abort": true, 00:08:53.046 "seek_hole": false, 00:08:53.046 "seek_data": false, 00:08:53.046 "copy": true, 00:08:53.046 "nvme_iov_md": false 00:08:53.046 }, 00:08:53.046 
"memory_domains": [ 00:08:53.046 { 00:08:53.046 "dma_device_id": "system", 00:08:53.046 "dma_device_type": 1 00:08:53.046 }, 00:08:53.046 { 00:08:53.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.046 "dma_device_type": 2 00:08:53.046 } 00:08:53.046 ], 00:08:53.046 "driver_specific": {} 00:08:53.046 } 00:08:53.046 ] 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.046 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.305 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.305 "name": "Existed_Raid", 00:08:53.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.305 "strip_size_kb": 0, 00:08:53.305 "state": "configuring", 00:08:53.305 "raid_level": "raid1", 00:08:53.305 "superblock": false, 00:08:53.305 "num_base_bdevs": 3, 00:08:53.305 "num_base_bdevs_discovered": 2, 00:08:53.305 "num_base_bdevs_operational": 3, 00:08:53.305 "base_bdevs_list": [ 00:08:53.305 { 00:08:53.305 "name": "BaseBdev1", 00:08:53.305 "uuid": "f79c785a-011d-48dd-b2cb-b0b43d07cc3b", 00:08:53.305 "is_configured": true, 00:08:53.305 "data_offset": 0, 00:08:53.305 "data_size": 65536 00:08:53.305 }, 00:08:53.305 { 00:08:53.305 "name": "BaseBdev2", 00:08:53.305 "uuid": "7fc042ee-caca-43a2-b4b1-c198c7447165", 00:08:53.305 "is_configured": true, 00:08:53.305 "data_offset": 0, 00:08:53.305 "data_size": 65536 00:08:53.305 }, 00:08:53.305 { 00:08:53.305 "name": "BaseBdev3", 00:08:53.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.305 "is_configured": false, 00:08:53.305 "data_offset": 0, 00:08:53.305 "data_size": 0 00:08:53.305 } 00:08:53.305 ] 00:08:53.305 }' 00:08:53.305 01:10:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.305 01:10:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.565 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:08:53.565 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.565 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.565 [2024-10-15 01:10:06.172564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.565 [2024-10-15 01:10:06.172692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:53.565 [2024-10-15 01:10:06.172720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:53.565 [2024-10-15 01:10:06.173053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:53.565 [2024-10-15 01:10:06.173260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:53.566 [2024-10-15 01:10:06.173305] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:53.566 [2024-10-15 01:10:06.173549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.566 BaseBdev3 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.566 [ 00:08:53.566 { 00:08:53.566 "name": "BaseBdev3", 00:08:53.566 "aliases": [ 00:08:53.566 "50592e80-08ea-4957-9cd3-277ecd7a25ab" 00:08:53.566 ], 00:08:53.566 "product_name": "Malloc disk", 00:08:53.566 "block_size": 512, 00:08:53.566 "num_blocks": 65536, 00:08:53.566 "uuid": "50592e80-08ea-4957-9cd3-277ecd7a25ab", 00:08:53.566 "assigned_rate_limits": { 00:08:53.566 "rw_ios_per_sec": 0, 00:08:53.566 "rw_mbytes_per_sec": 0, 00:08:53.566 "r_mbytes_per_sec": 0, 00:08:53.566 "w_mbytes_per_sec": 0 00:08:53.566 }, 00:08:53.566 "claimed": true, 00:08:53.566 "claim_type": "exclusive_write", 00:08:53.566 "zoned": false, 00:08:53.566 "supported_io_types": { 00:08:53.566 "read": true, 00:08:53.566 "write": true, 00:08:53.566 "unmap": true, 00:08:53.566 "flush": true, 00:08:53.566 "reset": true, 00:08:53.566 "nvme_admin": false, 00:08:53.566 "nvme_io": false, 00:08:53.566 "nvme_io_md": false, 00:08:53.566 "write_zeroes": true, 00:08:53.566 "zcopy": true, 00:08:53.566 "get_zone_info": false, 00:08:53.566 "zone_management": false, 00:08:53.566 "zone_append": false, 00:08:53.566 "compare": false, 00:08:53.566 "compare_and_write": false, 00:08:53.566 "abort": true, 00:08:53.566 "seek_hole": false, 00:08:53.566 "seek_data": false, 00:08:53.566 
"copy": true, 00:08:53.566 "nvme_iov_md": false 00:08:53.566 }, 00:08:53.566 "memory_domains": [ 00:08:53.566 { 00:08:53.566 "dma_device_id": "system", 00:08:53.566 "dma_device_type": 1 00:08:53.566 }, 00:08:53.566 { 00:08:53.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.566 "dma_device_type": 2 00:08:53.566 } 00:08:53.566 ], 00:08:53.566 "driver_specific": {} 00:08:53.566 } 00:08:53.566 ] 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.566 01:10:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.566 "name": "Existed_Raid", 00:08:53.566 "uuid": "a4350c28-8d9f-4644-a940-2df4270b942c", 00:08:53.566 "strip_size_kb": 0, 00:08:53.566 "state": "online", 00:08:53.566 "raid_level": "raid1", 00:08:53.566 "superblock": false, 00:08:53.566 "num_base_bdevs": 3, 00:08:53.566 "num_base_bdevs_discovered": 3, 00:08:53.566 "num_base_bdevs_operational": 3, 00:08:53.566 "base_bdevs_list": [ 00:08:53.566 { 00:08:53.566 "name": "BaseBdev1", 00:08:53.566 "uuid": "f79c785a-011d-48dd-b2cb-b0b43d07cc3b", 00:08:53.566 "is_configured": true, 00:08:53.566 "data_offset": 0, 00:08:53.566 "data_size": 65536 00:08:53.566 }, 00:08:53.566 { 00:08:53.566 "name": "BaseBdev2", 00:08:53.566 "uuid": "7fc042ee-caca-43a2-b4b1-c198c7447165", 00:08:53.566 "is_configured": true, 00:08:53.566 "data_offset": 0, 00:08:53.566 "data_size": 65536 00:08:53.566 }, 00:08:53.566 { 00:08:53.566 "name": "BaseBdev3", 00:08:53.566 "uuid": "50592e80-08ea-4957-9cd3-277ecd7a25ab", 00:08:53.566 "is_configured": true, 00:08:53.566 "data_offset": 0, 00:08:53.566 "data_size": 65536 00:08:53.566 } 00:08:53.566 ] 00:08:53.566 }' 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.566 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.134 01:10:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:54.134 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:54.134 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:54.134 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:54.134 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:54.134 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:54.134 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:54.134 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:54.134 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.134 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.134 [2024-10-15 01:10:06.691988] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.134 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.134 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:54.134 "name": "Existed_Raid", 00:08:54.134 "aliases": [ 00:08:54.134 "a4350c28-8d9f-4644-a940-2df4270b942c" 00:08:54.134 ], 00:08:54.134 "product_name": "Raid Volume", 00:08:54.134 "block_size": 512, 00:08:54.134 "num_blocks": 65536, 00:08:54.134 "uuid": "a4350c28-8d9f-4644-a940-2df4270b942c", 00:08:54.134 "assigned_rate_limits": { 00:08:54.135 "rw_ios_per_sec": 0, 00:08:54.135 "rw_mbytes_per_sec": 0, 00:08:54.135 "r_mbytes_per_sec": 0, 00:08:54.135 "w_mbytes_per_sec": 0 00:08:54.135 }, 00:08:54.135 "claimed": false, 00:08:54.135 "zoned": false, 
00:08:54.135 "supported_io_types": { 00:08:54.135 "read": true, 00:08:54.135 "write": true, 00:08:54.135 "unmap": false, 00:08:54.135 "flush": false, 00:08:54.135 "reset": true, 00:08:54.135 "nvme_admin": false, 00:08:54.135 "nvme_io": false, 00:08:54.135 "nvme_io_md": false, 00:08:54.135 "write_zeroes": true, 00:08:54.135 "zcopy": false, 00:08:54.135 "get_zone_info": false, 00:08:54.135 "zone_management": false, 00:08:54.135 "zone_append": false, 00:08:54.135 "compare": false, 00:08:54.135 "compare_and_write": false, 00:08:54.135 "abort": false, 00:08:54.135 "seek_hole": false, 00:08:54.135 "seek_data": false, 00:08:54.135 "copy": false, 00:08:54.135 "nvme_iov_md": false 00:08:54.135 }, 00:08:54.135 "memory_domains": [ 00:08:54.135 { 00:08:54.135 "dma_device_id": "system", 00:08:54.135 "dma_device_type": 1 00:08:54.135 }, 00:08:54.135 { 00:08:54.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.135 "dma_device_type": 2 00:08:54.135 }, 00:08:54.135 { 00:08:54.135 "dma_device_id": "system", 00:08:54.135 "dma_device_type": 1 00:08:54.135 }, 00:08:54.135 { 00:08:54.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.135 "dma_device_type": 2 00:08:54.135 }, 00:08:54.135 { 00:08:54.135 "dma_device_id": "system", 00:08:54.135 "dma_device_type": 1 00:08:54.135 }, 00:08:54.135 { 00:08:54.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.135 "dma_device_type": 2 00:08:54.135 } 00:08:54.135 ], 00:08:54.135 "driver_specific": { 00:08:54.135 "raid": { 00:08:54.135 "uuid": "a4350c28-8d9f-4644-a940-2df4270b942c", 00:08:54.135 "strip_size_kb": 0, 00:08:54.135 "state": "online", 00:08:54.135 "raid_level": "raid1", 00:08:54.135 "superblock": false, 00:08:54.135 "num_base_bdevs": 3, 00:08:54.135 "num_base_bdevs_discovered": 3, 00:08:54.135 "num_base_bdevs_operational": 3, 00:08:54.135 "base_bdevs_list": [ 00:08:54.135 { 00:08:54.135 "name": "BaseBdev1", 00:08:54.135 "uuid": "f79c785a-011d-48dd-b2cb-b0b43d07cc3b", 00:08:54.135 "is_configured": true, 00:08:54.135 
"data_offset": 0, 00:08:54.135 "data_size": 65536 00:08:54.135 }, 00:08:54.135 { 00:08:54.135 "name": "BaseBdev2", 00:08:54.135 "uuid": "7fc042ee-caca-43a2-b4b1-c198c7447165", 00:08:54.135 "is_configured": true, 00:08:54.135 "data_offset": 0, 00:08:54.135 "data_size": 65536 00:08:54.135 }, 00:08:54.135 { 00:08:54.135 "name": "BaseBdev3", 00:08:54.135 "uuid": "50592e80-08ea-4957-9cd3-277ecd7a25ab", 00:08:54.135 "is_configured": true, 00:08:54.135 "data_offset": 0, 00:08:54.135 "data_size": 65536 00:08:54.135 } 00:08:54.135 ] 00:08:54.135 } 00:08:54.135 } 00:08:54.135 }' 00:08:54.135 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:54.135 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:54.135 BaseBdev2 00:08:54.135 BaseBdev3' 00:08:54.135 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.135 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:54.135 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.135 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:54.135 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.135 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.135 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.135 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.135 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:54.135 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.135 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.429 [2024-10-15 01:10:06.943315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.429 01:10:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.429 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.429 "name": "Existed_Raid", 00:08:54.429 "uuid": "a4350c28-8d9f-4644-a940-2df4270b942c", 00:08:54.429 "strip_size_kb": 0, 00:08:54.429 "state": "online", 00:08:54.429 "raid_level": "raid1", 00:08:54.429 "superblock": false, 00:08:54.429 "num_base_bdevs": 3, 00:08:54.429 "num_base_bdevs_discovered": 2, 00:08:54.429 "num_base_bdevs_operational": 2, 00:08:54.429 "base_bdevs_list": [ 00:08:54.429 { 00:08:54.429 "name": null, 00:08:54.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.429 "is_configured": false, 00:08:54.429 "data_offset": 0, 00:08:54.430 "data_size": 65536 00:08:54.430 }, 00:08:54.430 { 00:08:54.430 "name": "BaseBdev2", 00:08:54.430 "uuid": "7fc042ee-caca-43a2-b4b1-c198c7447165", 00:08:54.430 "is_configured": true, 00:08:54.430 "data_offset": 0, 00:08:54.430 "data_size": 65536 00:08:54.430 }, 00:08:54.430 { 00:08:54.430 "name": "BaseBdev3", 00:08:54.430 "uuid": "50592e80-08ea-4957-9cd3-277ecd7a25ab", 00:08:54.430 "is_configured": true, 00:08:54.430 "data_offset": 0, 00:08:54.430 "data_size": 65536 00:08:54.430 } 00:08:54.430 ] 
00:08:54.430 }' 00:08:54.430 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.430 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.998 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:54.998 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:54.998 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.999 [2024-10-15 01:10:07.497742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:54.999 01:10:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.999 [2024-10-15 01:10:07.556804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:54.999 [2024-10-15 01:10:07.556937] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.999 [2024-10-15 01:10:07.568627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.999 [2024-10-15 01:10:07.568672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.999 [2024-10-15 01:10:07.568693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:54.999 01:10:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.999 BaseBdev2 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:54.999 
01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.999 [ 00:08:54.999 { 00:08:54.999 "name": "BaseBdev2", 00:08:54.999 "aliases": [ 00:08:54.999 "31bd3805-ee4e-4cb9-b3ec-c0579d5a50fa" 00:08:54.999 ], 00:08:54.999 "product_name": "Malloc disk", 00:08:54.999 "block_size": 512, 00:08:54.999 "num_blocks": 65536, 00:08:54.999 "uuid": "31bd3805-ee4e-4cb9-b3ec-c0579d5a50fa", 00:08:54.999 "assigned_rate_limits": { 00:08:54.999 "rw_ios_per_sec": 0, 00:08:54.999 "rw_mbytes_per_sec": 0, 00:08:54.999 "r_mbytes_per_sec": 0, 00:08:54.999 "w_mbytes_per_sec": 0 00:08:54.999 }, 00:08:54.999 "claimed": false, 00:08:54.999 "zoned": false, 00:08:54.999 "supported_io_types": { 00:08:54.999 "read": true, 00:08:54.999 "write": true, 00:08:54.999 "unmap": true, 00:08:54.999 "flush": true, 00:08:54.999 "reset": true, 00:08:54.999 "nvme_admin": false, 00:08:54.999 "nvme_io": false, 00:08:54.999 "nvme_io_md": false, 00:08:54.999 "write_zeroes": true, 
00:08:54.999 "zcopy": true, 00:08:54.999 "get_zone_info": false, 00:08:54.999 "zone_management": false, 00:08:54.999 "zone_append": false, 00:08:54.999 "compare": false, 00:08:54.999 "compare_and_write": false, 00:08:54.999 "abort": true, 00:08:54.999 "seek_hole": false, 00:08:54.999 "seek_data": false, 00:08:54.999 "copy": true, 00:08:54.999 "nvme_iov_md": false 00:08:54.999 }, 00:08:54.999 "memory_domains": [ 00:08:54.999 { 00:08:54.999 "dma_device_id": "system", 00:08:54.999 "dma_device_type": 1 00:08:54.999 }, 00:08:54.999 { 00:08:54.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.999 "dma_device_type": 2 00:08:54.999 } 00:08:54.999 ], 00:08:54.999 "driver_specific": {} 00:08:54.999 } 00:08:54.999 ] 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.999 BaseBdev3 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:54.999 01:10:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.999 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.999 [ 00:08:54.999 { 00:08:54.999 "name": "BaseBdev3", 00:08:54.999 "aliases": [ 00:08:54.999 "8172561e-f1fb-439e-a1eb-4851e23ba723" 00:08:54.999 ], 00:08:54.999 "product_name": "Malloc disk", 00:08:54.999 "block_size": 512, 00:08:54.999 "num_blocks": 65536, 00:08:54.999 "uuid": "8172561e-f1fb-439e-a1eb-4851e23ba723", 00:08:54.999 "assigned_rate_limits": { 00:08:54.999 "rw_ios_per_sec": 0, 00:08:54.999 "rw_mbytes_per_sec": 0, 00:08:55.259 "r_mbytes_per_sec": 0, 00:08:55.259 "w_mbytes_per_sec": 0 00:08:55.259 }, 00:08:55.259 "claimed": false, 00:08:55.259 "zoned": false, 00:08:55.259 "supported_io_types": { 00:08:55.259 "read": true, 00:08:55.259 "write": true, 00:08:55.259 "unmap": true, 00:08:55.259 "flush": true, 00:08:55.259 "reset": true, 00:08:55.259 "nvme_admin": false, 00:08:55.259 "nvme_io": false, 00:08:55.259 "nvme_io_md": false, 00:08:55.259 "write_zeroes": true, 
00:08:55.259 "zcopy": true, 00:08:55.259 "get_zone_info": false, 00:08:55.259 "zone_management": false, 00:08:55.259 "zone_append": false, 00:08:55.259 "compare": false, 00:08:55.259 "compare_and_write": false, 00:08:55.259 "abort": true, 00:08:55.259 "seek_hole": false, 00:08:55.259 "seek_data": false, 00:08:55.259 "copy": true, 00:08:55.259 "nvme_iov_md": false 00:08:55.259 }, 00:08:55.259 "memory_domains": [ 00:08:55.259 { 00:08:55.259 "dma_device_id": "system", 00:08:55.259 "dma_device_type": 1 00:08:55.259 }, 00:08:55.259 { 00:08:55.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.259 "dma_device_type": 2 00:08:55.259 } 00:08:55.259 ], 00:08:55.259 "driver_specific": {} 00:08:55.259 } 00:08:55.259 ] 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.259 [2024-10-15 01:10:07.741030] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.259 [2024-10-15 01:10:07.741120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.259 [2024-10-15 01:10:07.741161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.259 [2024-10-15 01:10:07.743029] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:55.259 "name": "Existed_Raid", 00:08:55.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.259 "strip_size_kb": 0, 00:08:55.259 "state": "configuring", 00:08:55.259 "raid_level": "raid1", 00:08:55.259 "superblock": false, 00:08:55.259 "num_base_bdevs": 3, 00:08:55.259 "num_base_bdevs_discovered": 2, 00:08:55.259 "num_base_bdevs_operational": 3, 00:08:55.259 "base_bdevs_list": [ 00:08:55.259 { 00:08:55.259 "name": "BaseBdev1", 00:08:55.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.259 "is_configured": false, 00:08:55.259 "data_offset": 0, 00:08:55.259 "data_size": 0 00:08:55.259 }, 00:08:55.259 { 00:08:55.259 "name": "BaseBdev2", 00:08:55.259 "uuid": "31bd3805-ee4e-4cb9-b3ec-c0579d5a50fa", 00:08:55.259 "is_configured": true, 00:08:55.259 "data_offset": 0, 00:08:55.259 "data_size": 65536 00:08:55.259 }, 00:08:55.259 { 00:08:55.259 "name": "BaseBdev3", 00:08:55.259 "uuid": "8172561e-f1fb-439e-a1eb-4851e23ba723", 00:08:55.259 "is_configured": true, 00:08:55.259 "data_offset": 0, 00:08:55.259 "data_size": 65536 00:08:55.259 } 00:08:55.259 ] 00:08:55.259 }' 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.259 01:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.519 [2024-10-15 01:10:08.164318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.519 "name": "Existed_Raid", 00:08:55.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.519 "strip_size_kb": 0, 00:08:55.519 "state": "configuring", 00:08:55.519 "raid_level": "raid1", 00:08:55.519 "superblock": false, 00:08:55.519 "num_base_bdevs": 3, 
00:08:55.519 "num_base_bdevs_discovered": 1, 00:08:55.519 "num_base_bdevs_operational": 3, 00:08:55.519 "base_bdevs_list": [ 00:08:55.519 { 00:08:55.519 "name": "BaseBdev1", 00:08:55.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.519 "is_configured": false, 00:08:55.519 "data_offset": 0, 00:08:55.519 "data_size": 0 00:08:55.519 }, 00:08:55.519 { 00:08:55.519 "name": null, 00:08:55.519 "uuid": "31bd3805-ee4e-4cb9-b3ec-c0579d5a50fa", 00:08:55.519 "is_configured": false, 00:08:55.519 "data_offset": 0, 00:08:55.519 "data_size": 65536 00:08:55.519 }, 00:08:55.519 { 00:08:55.519 "name": "BaseBdev3", 00:08:55.519 "uuid": "8172561e-f1fb-439e-a1eb-4851e23ba723", 00:08:55.519 "is_configured": true, 00:08:55.519 "data_offset": 0, 00:08:55.519 "data_size": 65536 00:08:55.519 } 00:08:55.519 ] 00:08:55.519 }' 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.519 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.777 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.777 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.777 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.777 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:55.777 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.036 01:10:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.036 [2024-10-15 01:10:08.538722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.036 BaseBdev1 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.036 [ 00:08:56.036 { 00:08:56.036 "name": "BaseBdev1", 00:08:56.036 "aliases": [ 00:08:56.036 "03d70724-1a54-4191-99da-0373f208582e" 00:08:56.036 ], 00:08:56.036 "product_name": "Malloc disk", 
00:08:56.036 "block_size": 512, 00:08:56.036 "num_blocks": 65536, 00:08:56.036 "uuid": "03d70724-1a54-4191-99da-0373f208582e", 00:08:56.036 "assigned_rate_limits": { 00:08:56.036 "rw_ios_per_sec": 0, 00:08:56.036 "rw_mbytes_per_sec": 0, 00:08:56.036 "r_mbytes_per_sec": 0, 00:08:56.036 "w_mbytes_per_sec": 0 00:08:56.036 }, 00:08:56.036 "claimed": true, 00:08:56.036 "claim_type": "exclusive_write", 00:08:56.036 "zoned": false, 00:08:56.036 "supported_io_types": { 00:08:56.036 "read": true, 00:08:56.036 "write": true, 00:08:56.036 "unmap": true, 00:08:56.036 "flush": true, 00:08:56.036 "reset": true, 00:08:56.036 "nvme_admin": false, 00:08:56.036 "nvme_io": false, 00:08:56.036 "nvme_io_md": false, 00:08:56.036 "write_zeroes": true, 00:08:56.036 "zcopy": true, 00:08:56.036 "get_zone_info": false, 00:08:56.036 "zone_management": false, 00:08:56.036 "zone_append": false, 00:08:56.036 "compare": false, 00:08:56.036 "compare_and_write": false, 00:08:56.036 "abort": true, 00:08:56.036 "seek_hole": false, 00:08:56.036 "seek_data": false, 00:08:56.036 "copy": true, 00:08:56.036 "nvme_iov_md": false 00:08:56.036 }, 00:08:56.036 "memory_domains": [ 00:08:56.036 { 00:08:56.036 "dma_device_id": "system", 00:08:56.036 "dma_device_type": 1 00:08:56.036 }, 00:08:56.036 { 00:08:56.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.036 "dma_device_type": 2 00:08:56.036 } 00:08:56.036 ], 00:08:56.036 "driver_specific": {} 00:08:56.036 } 00:08:56.036 ] 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.036 "name": "Existed_Raid", 00:08:56.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.036 "strip_size_kb": 0, 00:08:56.036 "state": "configuring", 00:08:56.036 "raid_level": "raid1", 00:08:56.036 "superblock": false, 00:08:56.036 "num_base_bdevs": 3, 00:08:56.036 "num_base_bdevs_discovered": 2, 00:08:56.036 "num_base_bdevs_operational": 3, 00:08:56.036 "base_bdevs_list": [ 00:08:56.036 { 00:08:56.036 "name": "BaseBdev1", 00:08:56.036 "uuid": 
"03d70724-1a54-4191-99da-0373f208582e", 00:08:56.036 "is_configured": true, 00:08:56.036 "data_offset": 0, 00:08:56.036 "data_size": 65536 00:08:56.036 }, 00:08:56.036 { 00:08:56.036 "name": null, 00:08:56.036 "uuid": "31bd3805-ee4e-4cb9-b3ec-c0579d5a50fa", 00:08:56.036 "is_configured": false, 00:08:56.036 "data_offset": 0, 00:08:56.036 "data_size": 65536 00:08:56.036 }, 00:08:56.036 { 00:08:56.036 "name": "BaseBdev3", 00:08:56.036 "uuid": "8172561e-f1fb-439e-a1eb-4851e23ba723", 00:08:56.036 "is_configured": true, 00:08:56.036 "data_offset": 0, 00:08:56.036 "data_size": 65536 00:08:56.036 } 00:08:56.036 ] 00:08:56.036 }' 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.036 01:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.606 [2024-10-15 01:10:09.073935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:56.606 01:10:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.606 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.607 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.607 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.607 "name": "Existed_Raid", 00:08:56.607 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:56.607 "strip_size_kb": 0, 00:08:56.607 "state": "configuring", 00:08:56.607 "raid_level": "raid1", 00:08:56.607 "superblock": false, 00:08:56.607 "num_base_bdevs": 3, 00:08:56.607 "num_base_bdevs_discovered": 1, 00:08:56.607 "num_base_bdevs_operational": 3, 00:08:56.607 "base_bdevs_list": [ 00:08:56.607 { 00:08:56.607 "name": "BaseBdev1", 00:08:56.607 "uuid": "03d70724-1a54-4191-99da-0373f208582e", 00:08:56.607 "is_configured": true, 00:08:56.607 "data_offset": 0, 00:08:56.607 "data_size": 65536 00:08:56.607 }, 00:08:56.607 { 00:08:56.607 "name": null, 00:08:56.607 "uuid": "31bd3805-ee4e-4cb9-b3ec-c0579d5a50fa", 00:08:56.607 "is_configured": false, 00:08:56.607 "data_offset": 0, 00:08:56.607 "data_size": 65536 00:08:56.607 }, 00:08:56.607 { 00:08:56.607 "name": null, 00:08:56.607 "uuid": "8172561e-f1fb-439e-a1eb-4851e23ba723", 00:08:56.607 "is_configured": false, 00:08:56.607 "data_offset": 0, 00:08:56.607 "data_size": 65536 00:08:56.607 } 00:08:56.607 ] 00:08:56.607 }' 00:08:56.607 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.607 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.866 [2024-10-15 01:10:09.549125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.866 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.866 "name": "Existed_Raid", 00:08:56.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.866 "strip_size_kb": 0, 00:08:56.866 "state": "configuring", 00:08:56.866 "raid_level": "raid1", 00:08:56.866 "superblock": false, 00:08:56.866 "num_base_bdevs": 3, 00:08:56.866 "num_base_bdevs_discovered": 2, 00:08:56.866 "num_base_bdevs_operational": 3, 00:08:56.866 "base_bdevs_list": [ 00:08:56.866 { 00:08:56.866 "name": "BaseBdev1", 00:08:56.866 "uuid": "03d70724-1a54-4191-99da-0373f208582e", 00:08:56.866 "is_configured": true, 00:08:56.866 "data_offset": 0, 00:08:56.866 "data_size": 65536 00:08:56.866 }, 00:08:56.866 { 00:08:56.866 "name": null, 00:08:56.866 "uuid": "31bd3805-ee4e-4cb9-b3ec-c0579d5a50fa", 00:08:56.866 "is_configured": false, 00:08:56.866 "data_offset": 0, 00:08:56.866 "data_size": 65536 00:08:56.866 }, 00:08:56.866 { 00:08:56.866 "name": "BaseBdev3", 00:08:56.866 "uuid": "8172561e-f1fb-439e-a1eb-4851e23ba723", 00:08:56.866 "is_configured": true, 00:08:56.866 "data_offset": 0, 00:08:56.866 "data_size": 65536 00:08:56.866 } 00:08:56.866 ] 00:08:56.866 }' 00:08:57.126 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.126 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.385 [2024-10-15 01:10:09.984391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.385 01:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.385 01:10:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.385 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.385 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.385 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.385 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.385 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.385 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.385 "name": "Existed_Raid", 00:08:57.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.385 "strip_size_kb": 0, 00:08:57.385 "state": "configuring", 00:08:57.385 "raid_level": "raid1", 00:08:57.385 "superblock": false, 00:08:57.385 "num_base_bdevs": 3, 00:08:57.385 "num_base_bdevs_discovered": 1, 00:08:57.385 "num_base_bdevs_operational": 3, 00:08:57.385 "base_bdevs_list": [ 00:08:57.385 { 00:08:57.385 "name": null, 00:08:57.385 "uuid": "03d70724-1a54-4191-99da-0373f208582e", 00:08:57.385 "is_configured": false, 00:08:57.385 "data_offset": 0, 00:08:57.385 "data_size": 65536 00:08:57.385 }, 00:08:57.385 { 00:08:57.385 "name": null, 00:08:57.386 "uuid": "31bd3805-ee4e-4cb9-b3ec-c0579d5a50fa", 00:08:57.386 "is_configured": false, 00:08:57.386 "data_offset": 0, 00:08:57.386 "data_size": 65536 00:08:57.386 }, 00:08:57.386 { 00:08:57.386 "name": "BaseBdev3", 00:08:57.386 "uuid": "8172561e-f1fb-439e-a1eb-4851e23ba723", 00:08:57.386 "is_configured": true, 00:08:57.386 "data_offset": 0, 00:08:57.386 "data_size": 65536 00:08:57.386 } 00:08:57.386 ] 00:08:57.386 }' 00:08:57.386 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.386 01:10:10 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:57.955 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.955 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:57.955 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.955 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.955 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.955 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:57.955 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:57.955 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.955 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.955 [2024-10-15 01:10:10.438069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:57.955 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.955 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:57.955 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.955 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.956 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.956 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.956 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:08:57.956 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.956 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.956 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.956 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.956 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.956 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.956 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.956 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.956 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.956 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.956 "name": "Existed_Raid", 00:08:57.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.956 "strip_size_kb": 0, 00:08:57.956 "state": "configuring", 00:08:57.956 "raid_level": "raid1", 00:08:57.956 "superblock": false, 00:08:57.956 "num_base_bdevs": 3, 00:08:57.956 "num_base_bdevs_discovered": 2, 00:08:57.956 "num_base_bdevs_operational": 3, 00:08:57.956 "base_bdevs_list": [ 00:08:57.956 { 00:08:57.956 "name": null, 00:08:57.956 "uuid": "03d70724-1a54-4191-99da-0373f208582e", 00:08:57.956 "is_configured": false, 00:08:57.956 "data_offset": 0, 00:08:57.956 "data_size": 65536 00:08:57.956 }, 00:08:57.956 { 00:08:57.956 "name": "BaseBdev2", 00:08:57.956 "uuid": "31bd3805-ee4e-4cb9-b3ec-c0579d5a50fa", 00:08:57.956 "is_configured": true, 00:08:57.956 "data_offset": 0, 00:08:57.956 "data_size": 65536 00:08:57.956 }, 00:08:57.956 { 
00:08:57.956 "name": "BaseBdev3", 00:08:57.956 "uuid": "8172561e-f1fb-439e-a1eb-4851e23ba723", 00:08:57.956 "is_configured": true, 00:08:57.956 "data_offset": 0, 00:08:57.956 "data_size": 65536 00:08:57.956 } 00:08:57.956 ] 00:08:57.956 }' 00:08:57.956 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.956 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.216 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.216 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.216 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.216 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:58.216 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.216 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:58.216 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.216 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.216 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.216 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:58.476 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.476 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 03d70724-1a54-4191-99da-0373f208582e 00:08:58.476 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.476 01:10:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.476 [2024-10-15 01:10:10.992115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:58.476 [2024-10-15 01:10:10.992165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:58.476 [2024-10-15 01:10:10.992173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:58.476 [2024-10-15 01:10:10.992427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:58.476 [2024-10-15 01:10:10.992535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:58.476 [2024-10-15 01:10:10.992548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:58.476 [2024-10-15 01:10:10.992738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.476 NewBaseBdev 00:08:58.476 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.476 01:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:58.476 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:58.476 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.476 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:58.476 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.476 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.476 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:58.476 01:10:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.476 01:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.476 [ 00:08:58.476 { 00:08:58.476 "name": "NewBaseBdev", 00:08:58.476 "aliases": [ 00:08:58.476 "03d70724-1a54-4191-99da-0373f208582e" 00:08:58.476 ], 00:08:58.476 "product_name": "Malloc disk", 00:08:58.476 "block_size": 512, 00:08:58.476 "num_blocks": 65536, 00:08:58.476 "uuid": "03d70724-1a54-4191-99da-0373f208582e", 00:08:58.476 "assigned_rate_limits": { 00:08:58.476 "rw_ios_per_sec": 0, 00:08:58.476 "rw_mbytes_per_sec": 0, 00:08:58.476 "r_mbytes_per_sec": 0, 00:08:58.476 "w_mbytes_per_sec": 0 00:08:58.476 }, 00:08:58.476 "claimed": true, 00:08:58.476 "claim_type": "exclusive_write", 00:08:58.476 "zoned": false, 00:08:58.476 "supported_io_types": { 00:08:58.476 "read": true, 00:08:58.476 "write": true, 00:08:58.476 "unmap": true, 00:08:58.476 "flush": true, 00:08:58.476 "reset": true, 00:08:58.476 "nvme_admin": false, 00:08:58.476 "nvme_io": false, 00:08:58.476 "nvme_io_md": false, 00:08:58.476 "write_zeroes": true, 00:08:58.476 "zcopy": true, 00:08:58.476 "get_zone_info": false, 00:08:58.476 "zone_management": false, 00:08:58.476 "zone_append": false, 00:08:58.476 "compare": false, 00:08:58.476 "compare_and_write": false, 00:08:58.476 "abort": true, 00:08:58.476 "seek_hole": false, 00:08:58.476 "seek_data": false, 00:08:58.476 "copy": true, 00:08:58.476 "nvme_iov_md": false 00:08:58.476 }, 00:08:58.476 "memory_domains": [ 00:08:58.476 { 00:08:58.476 
"dma_device_id": "system", 00:08:58.476 "dma_device_type": 1 00:08:58.476 }, 00:08:58.476 { 00:08:58.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.476 "dma_device_type": 2 00:08:58.476 } 00:08:58.476 ], 00:08:58.476 "driver_specific": {} 00:08:58.476 } 00:08:58.476 ] 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.476 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.477 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.477 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.477 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:58.477 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.477 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.477 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.477 "name": "Existed_Raid", 00:08:58.477 "uuid": "e707d6e4-d29a-4b3c-a52b-0dde2c71cb59", 00:08:58.477 "strip_size_kb": 0, 00:08:58.477 "state": "online", 00:08:58.477 "raid_level": "raid1", 00:08:58.477 "superblock": false, 00:08:58.477 "num_base_bdevs": 3, 00:08:58.477 "num_base_bdevs_discovered": 3, 00:08:58.477 "num_base_bdevs_operational": 3, 00:08:58.477 "base_bdevs_list": [ 00:08:58.477 { 00:08:58.477 "name": "NewBaseBdev", 00:08:58.477 "uuid": "03d70724-1a54-4191-99da-0373f208582e", 00:08:58.477 "is_configured": true, 00:08:58.477 "data_offset": 0, 00:08:58.477 "data_size": 65536 00:08:58.477 }, 00:08:58.477 { 00:08:58.477 "name": "BaseBdev2", 00:08:58.477 "uuid": "31bd3805-ee4e-4cb9-b3ec-c0579d5a50fa", 00:08:58.477 "is_configured": true, 00:08:58.477 "data_offset": 0, 00:08:58.477 "data_size": 65536 00:08:58.477 }, 00:08:58.477 { 00:08:58.477 "name": "BaseBdev3", 00:08:58.477 "uuid": "8172561e-f1fb-439e-a1eb-4851e23ba723", 00:08:58.477 "is_configured": true, 00:08:58.477 "data_offset": 0, 00:08:58.477 "data_size": 65536 00:08:58.477 } 00:08:58.477 ] 00:08:58.477 }' 00:08:58.477 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.477 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.736 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:58.736 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:58.736 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:58.736 01:10:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:58.736 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:58.736 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:58.736 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:58.736 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.736 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.736 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:58.736 [2024-10-15 01:10:11.411741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.736 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.737 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:58.737 "name": "Existed_Raid", 00:08:58.737 "aliases": [ 00:08:58.737 "e707d6e4-d29a-4b3c-a52b-0dde2c71cb59" 00:08:58.737 ], 00:08:58.737 "product_name": "Raid Volume", 00:08:58.737 "block_size": 512, 00:08:58.737 "num_blocks": 65536, 00:08:58.737 "uuid": "e707d6e4-d29a-4b3c-a52b-0dde2c71cb59", 00:08:58.737 "assigned_rate_limits": { 00:08:58.737 "rw_ios_per_sec": 0, 00:08:58.737 "rw_mbytes_per_sec": 0, 00:08:58.737 "r_mbytes_per_sec": 0, 00:08:58.737 "w_mbytes_per_sec": 0 00:08:58.737 }, 00:08:58.737 "claimed": false, 00:08:58.737 "zoned": false, 00:08:58.737 "supported_io_types": { 00:08:58.737 "read": true, 00:08:58.737 "write": true, 00:08:58.737 "unmap": false, 00:08:58.737 "flush": false, 00:08:58.737 "reset": true, 00:08:58.737 "nvme_admin": false, 00:08:58.737 "nvme_io": false, 00:08:58.737 "nvme_io_md": false, 00:08:58.737 "write_zeroes": true, 00:08:58.737 "zcopy": false, 00:08:58.737 
"get_zone_info": false, 00:08:58.737 "zone_management": false, 00:08:58.737 "zone_append": false, 00:08:58.737 "compare": false, 00:08:58.737 "compare_and_write": false, 00:08:58.737 "abort": false, 00:08:58.737 "seek_hole": false, 00:08:58.737 "seek_data": false, 00:08:58.737 "copy": false, 00:08:58.737 "nvme_iov_md": false 00:08:58.737 }, 00:08:58.737 "memory_domains": [ 00:08:58.737 { 00:08:58.737 "dma_device_id": "system", 00:08:58.737 "dma_device_type": 1 00:08:58.737 }, 00:08:58.737 { 00:08:58.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.737 "dma_device_type": 2 00:08:58.737 }, 00:08:58.737 { 00:08:58.737 "dma_device_id": "system", 00:08:58.737 "dma_device_type": 1 00:08:58.737 }, 00:08:58.737 { 00:08:58.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.737 "dma_device_type": 2 00:08:58.737 }, 00:08:58.737 { 00:08:58.737 "dma_device_id": "system", 00:08:58.737 "dma_device_type": 1 00:08:58.737 }, 00:08:58.737 { 00:08:58.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.737 "dma_device_type": 2 00:08:58.737 } 00:08:58.737 ], 00:08:58.737 "driver_specific": { 00:08:58.737 "raid": { 00:08:58.737 "uuid": "e707d6e4-d29a-4b3c-a52b-0dde2c71cb59", 00:08:58.737 "strip_size_kb": 0, 00:08:58.737 "state": "online", 00:08:58.737 "raid_level": "raid1", 00:08:58.737 "superblock": false, 00:08:58.737 "num_base_bdevs": 3, 00:08:58.737 "num_base_bdevs_discovered": 3, 00:08:58.737 "num_base_bdevs_operational": 3, 00:08:58.737 "base_bdevs_list": [ 00:08:58.737 { 00:08:58.737 "name": "NewBaseBdev", 00:08:58.737 "uuid": "03d70724-1a54-4191-99da-0373f208582e", 00:08:58.737 "is_configured": true, 00:08:58.737 "data_offset": 0, 00:08:58.737 "data_size": 65536 00:08:58.737 }, 00:08:58.737 { 00:08:58.737 "name": "BaseBdev2", 00:08:58.737 "uuid": "31bd3805-ee4e-4cb9-b3ec-c0579d5a50fa", 00:08:58.737 "is_configured": true, 00:08:58.737 "data_offset": 0, 00:08:58.737 "data_size": 65536 00:08:58.737 }, 00:08:58.737 { 00:08:58.737 "name": "BaseBdev3", 00:08:58.737 "uuid": 
"8172561e-f1fb-439e-a1eb-4851e23ba723", 00:08:58.737 "is_configured": true, 00:08:58.737 "data_offset": 0, 00:08:58.737 "data_size": 65536 00:08:58.737 } 00:08:58.737 ] 00:08:58.737 } 00:08:58.737 } 00:08:58.737 }' 00:08:58.737 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:58.997 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:58.997 BaseBdev2 00:08:58.997 BaseBdev3' 00:08:58.997 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.997 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:58.997 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.997 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:58.997 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.997 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.997 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.997 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.997 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.997 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.997 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.997 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.998 
[2024-10-15 01:10:11.670981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.998 [2024-10-15 01:10:11.671050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.998 [2024-10-15 01:10:11.671152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.998 [2024-10-15 01:10:11.671424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.998 [2024-10-15 01:10:11.671477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78199 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78199 ']' 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78199 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78199 00:08:58.998 killing process with pid 78199 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78199' 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78199 00:08:58.998 [2024-10-15 
01:10:11.719345] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:58.998 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78199 00:08:59.258 [2024-10-15 01:10:11.750984] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:59.258 01:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:59.258 ************************************ 00:08:59.258 END TEST raid_state_function_test 00:08:59.258 00:08:59.258 real 0m8.606s 00:08:59.258 user 0m14.716s 00:08:59.258 sys 0m1.760s 00:08:59.258 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.258 01:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.258 ************************************ 00:08:59.518 01:10:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:08:59.518 01:10:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:59.518 01:10:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.518 01:10:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:59.518 ************************************ 00:08:59.518 START TEST raid_state_function_test_sb 00:08:59.518 ************************************ 00:08:59.518 01:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:08:59.518 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:59.518 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:59.518 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:59.518 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:59.518 01:10:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:59.518 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.518 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:59.518 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:59.518 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.518 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:59.518 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:59.518 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.518 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:59.518 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:59.518 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:59.519 
01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78804 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78804' 00:08:59.519 Process raid pid: 78804 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78804 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78804 ']' 00:08:59.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.519 01:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.519 [2024-10-15 01:10:12.127237] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:08:59.519 [2024-10-15 01:10:12.127358] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.779 [2024-10-15 01:10:12.274376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.779 [2024-10-15 01:10:12.302688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.779 [2024-10-15 01:10:12.345459] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.779 [2024-10-15 01:10:12.345500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.348 [2024-10-15 01:10:12.963427] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:00.348 [2024-10-15 01:10:12.963476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:00.348 [2024-10-15 01:10:12.963487] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.348 [2024-10-15 01:10:12.963497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.348 [2024-10-15 01:10:12.963503] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:00.348 [2024-10-15 01:10:12.963609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.348 01:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.349 01:10:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.349 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.349 "name": "Existed_Raid", 00:09:00.349 "uuid": "db50e5f8-3b09-41a4-a162-e0e12bc5cbed", 00:09:00.349 "strip_size_kb": 0, 00:09:00.349 "state": "configuring", 00:09:00.349 "raid_level": "raid1", 00:09:00.349 "superblock": true, 00:09:00.349 "num_base_bdevs": 3, 00:09:00.349 "num_base_bdevs_discovered": 0, 00:09:00.349 "num_base_bdevs_operational": 3, 00:09:00.349 "base_bdevs_list": [ 00:09:00.349 { 00:09:00.349 "name": "BaseBdev1", 00:09:00.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.349 "is_configured": false, 00:09:00.349 "data_offset": 0, 00:09:00.349 "data_size": 0 00:09:00.349 }, 00:09:00.349 { 00:09:00.349 "name": "BaseBdev2", 00:09:00.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.349 "is_configured": false, 00:09:00.349 "data_offset": 0, 00:09:00.349 "data_size": 0 00:09:00.349 }, 00:09:00.349 { 00:09:00.349 "name": "BaseBdev3", 00:09:00.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.349 "is_configured": false, 00:09:00.349 "data_offset": 0, 00:09:00.349 "data_size": 0 00:09:00.349 } 00:09:00.349 ] 00:09:00.349 }' 00:09:00.349 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.349 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.919 [2024-10-15 01:10:13.370635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.919 [2024-10-15 01:10:13.370744] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.919 [2024-10-15 01:10:13.382651] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:00.919 [2024-10-15 01:10:13.382733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:00.919 [2024-10-15 01:10:13.382763] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.919 [2024-10-15 01:10:13.382786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.919 [2024-10-15 01:10:13.382805] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:00.919 [2024-10-15 01:10:13.382826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.919 [2024-10-15 01:10:13.403530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.919 BaseBdev1 
00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.919 [ 00:09:00.919 { 00:09:00.919 "name": "BaseBdev1", 00:09:00.919 "aliases": [ 00:09:00.919 "60580bdb-da00-42c4-8d1c-98ec85f7d7e2" 00:09:00.919 ], 00:09:00.919 "product_name": "Malloc disk", 00:09:00.919 "block_size": 512, 00:09:00.919 "num_blocks": 65536, 00:09:00.919 "uuid": "60580bdb-da00-42c4-8d1c-98ec85f7d7e2", 00:09:00.919 "assigned_rate_limits": { 00:09:00.919 
"rw_ios_per_sec": 0, 00:09:00.919 "rw_mbytes_per_sec": 0, 00:09:00.919 "r_mbytes_per_sec": 0, 00:09:00.919 "w_mbytes_per_sec": 0 00:09:00.919 }, 00:09:00.919 "claimed": true, 00:09:00.919 "claim_type": "exclusive_write", 00:09:00.919 "zoned": false, 00:09:00.919 "supported_io_types": { 00:09:00.919 "read": true, 00:09:00.919 "write": true, 00:09:00.919 "unmap": true, 00:09:00.919 "flush": true, 00:09:00.919 "reset": true, 00:09:00.919 "nvme_admin": false, 00:09:00.919 "nvme_io": false, 00:09:00.919 "nvme_io_md": false, 00:09:00.919 "write_zeroes": true, 00:09:00.919 "zcopy": true, 00:09:00.919 "get_zone_info": false, 00:09:00.919 "zone_management": false, 00:09:00.919 "zone_append": false, 00:09:00.919 "compare": false, 00:09:00.919 "compare_and_write": false, 00:09:00.919 "abort": true, 00:09:00.919 "seek_hole": false, 00:09:00.919 "seek_data": false, 00:09:00.919 "copy": true, 00:09:00.919 "nvme_iov_md": false 00:09:00.919 }, 00:09:00.919 "memory_domains": [ 00:09:00.919 { 00:09:00.919 "dma_device_id": "system", 00:09:00.919 "dma_device_type": 1 00:09:00.919 }, 00:09:00.919 { 00:09:00.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.919 "dma_device_type": 2 00:09:00.919 } 00:09:00.919 ], 00:09:00.919 "driver_specific": {} 00:09:00.919 } 00:09:00.919 ] 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.919 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.919 "name": "Existed_Raid", 00:09:00.919 "uuid": "70b32dbe-4ea4-4caa-a05a-fd78440599ee", 00:09:00.919 "strip_size_kb": 0, 00:09:00.919 "state": "configuring", 00:09:00.919 "raid_level": "raid1", 00:09:00.919 "superblock": true, 00:09:00.920 "num_base_bdevs": 3, 00:09:00.920 "num_base_bdevs_discovered": 1, 00:09:00.920 "num_base_bdevs_operational": 3, 00:09:00.920 "base_bdevs_list": [ 00:09:00.920 { 00:09:00.920 "name": "BaseBdev1", 00:09:00.920 "uuid": "60580bdb-da00-42c4-8d1c-98ec85f7d7e2", 00:09:00.920 "is_configured": true, 00:09:00.920 "data_offset": 2048, 00:09:00.920 "data_size": 63488 
00:09:00.920 }, 00:09:00.920 { 00:09:00.920 "name": "BaseBdev2", 00:09:00.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.920 "is_configured": false, 00:09:00.920 "data_offset": 0, 00:09:00.920 "data_size": 0 00:09:00.920 }, 00:09:00.920 { 00:09:00.920 "name": "BaseBdev3", 00:09:00.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.920 "is_configured": false, 00:09:00.920 "data_offset": 0, 00:09:00.920 "data_size": 0 00:09:00.920 } 00:09:00.920 ] 00:09:00.920 }' 00:09:00.920 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.920 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.180 [2024-10-15 01:10:13.874774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:01.180 [2024-10-15 01:10:13.874827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.180 [2024-10-15 01:10:13.886804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.180 [2024-10-15 01:10:13.888631] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.180 [2024-10-15 01:10:13.888675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.180 [2024-10-15 01:10:13.888684] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:01.180 [2024-10-15 01:10:13.888695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.180 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.440 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.440 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.440 "name": "Existed_Raid", 00:09:01.440 "uuid": "893c2d78-557f-4740-80e7-b26345317102", 00:09:01.440 "strip_size_kb": 0, 00:09:01.440 "state": "configuring", 00:09:01.440 "raid_level": "raid1", 00:09:01.440 "superblock": true, 00:09:01.440 "num_base_bdevs": 3, 00:09:01.440 "num_base_bdevs_discovered": 1, 00:09:01.440 "num_base_bdevs_operational": 3, 00:09:01.440 "base_bdevs_list": [ 00:09:01.440 { 00:09:01.440 "name": "BaseBdev1", 00:09:01.440 "uuid": "60580bdb-da00-42c4-8d1c-98ec85f7d7e2", 00:09:01.440 "is_configured": true, 00:09:01.440 "data_offset": 2048, 00:09:01.440 "data_size": 63488 00:09:01.440 }, 00:09:01.440 { 00:09:01.440 "name": "BaseBdev2", 00:09:01.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.440 "is_configured": false, 00:09:01.440 "data_offset": 0, 00:09:01.440 "data_size": 0 00:09:01.440 }, 00:09:01.440 { 00:09:01.440 "name": "BaseBdev3", 00:09:01.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.440 "is_configured": false, 00:09:01.440 "data_offset": 0, 00:09:01.440 "data_size": 0 00:09:01.440 } 00:09:01.440 ] 00:09:01.440 }' 00:09:01.440 01:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.440 01:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.700 [2024-10-15 01:10:14.325039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.700 BaseBdev2 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.700 [ 00:09:01.700 { 00:09:01.700 "name": "BaseBdev2", 00:09:01.700 "aliases": [ 00:09:01.700 "020f2aff-c083-487e-9888-5ce02898eca2" 00:09:01.700 ], 00:09:01.700 "product_name": "Malloc disk", 00:09:01.700 "block_size": 512, 00:09:01.700 "num_blocks": 65536, 00:09:01.700 "uuid": "020f2aff-c083-487e-9888-5ce02898eca2", 00:09:01.700 "assigned_rate_limits": { 00:09:01.700 "rw_ios_per_sec": 0, 00:09:01.700 "rw_mbytes_per_sec": 0, 00:09:01.700 "r_mbytes_per_sec": 0, 00:09:01.700 "w_mbytes_per_sec": 0 00:09:01.700 }, 00:09:01.700 "claimed": true, 00:09:01.700 "claim_type": "exclusive_write", 00:09:01.700 "zoned": false, 00:09:01.700 "supported_io_types": { 00:09:01.700 "read": true, 00:09:01.700 "write": true, 00:09:01.700 "unmap": true, 00:09:01.700 "flush": true, 00:09:01.700 "reset": true, 00:09:01.700 "nvme_admin": false, 00:09:01.700 "nvme_io": false, 00:09:01.700 "nvme_io_md": false, 00:09:01.700 "write_zeroes": true, 00:09:01.700 "zcopy": true, 00:09:01.700 "get_zone_info": false, 00:09:01.700 "zone_management": false, 00:09:01.700 "zone_append": false, 00:09:01.700 "compare": false, 00:09:01.700 "compare_and_write": false, 00:09:01.700 "abort": true, 00:09:01.700 "seek_hole": false, 00:09:01.700 "seek_data": false, 00:09:01.700 "copy": true, 00:09:01.700 "nvme_iov_md": false 00:09:01.700 }, 00:09:01.700 "memory_domains": [ 00:09:01.700 { 00:09:01.700 "dma_device_id": "system", 00:09:01.700 "dma_device_type": 1 00:09:01.700 }, 00:09:01.700 { 00:09:01.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.700 "dma_device_type": 2 00:09:01.700 } 00:09:01.700 ], 00:09:01.700 "driver_specific": {} 00:09:01.700 } 00:09:01.700 ] 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.700 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.700 
01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.700 "name": "Existed_Raid", 00:09:01.700 "uuid": "893c2d78-557f-4740-80e7-b26345317102", 00:09:01.700 "strip_size_kb": 0, 00:09:01.700 "state": "configuring", 00:09:01.700 "raid_level": "raid1", 00:09:01.700 "superblock": true, 00:09:01.700 "num_base_bdevs": 3, 00:09:01.700 "num_base_bdevs_discovered": 2, 00:09:01.700 "num_base_bdevs_operational": 3, 00:09:01.700 "base_bdevs_list": [ 00:09:01.700 { 00:09:01.700 "name": "BaseBdev1", 00:09:01.700 "uuid": "60580bdb-da00-42c4-8d1c-98ec85f7d7e2", 00:09:01.700 "is_configured": true, 00:09:01.700 "data_offset": 2048, 00:09:01.700 "data_size": 63488 00:09:01.700 }, 00:09:01.700 { 00:09:01.700 "name": "BaseBdev2", 00:09:01.700 "uuid": "020f2aff-c083-487e-9888-5ce02898eca2", 00:09:01.701 "is_configured": true, 00:09:01.701 "data_offset": 2048, 00:09:01.701 "data_size": 63488 00:09:01.701 }, 00:09:01.701 { 00:09:01.701 "name": "BaseBdev3", 00:09:01.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.701 "is_configured": false, 00:09:01.701 "data_offset": 0, 00:09:01.701 "data_size": 0 00:09:01.701 } 00:09:01.701 ] 00:09:01.701 }' 00:09:01.701 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.701 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.271 [2024-10-15 01:10:14.833564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.271 [2024-10-15 01:10:14.833764] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000001900 00:09:02.271 [2024-10-15 01:10:14.833790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:02.271 BaseBdev3 00:09:02.271 [2024-10-15 01:10:14.834063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:02.271 [2024-10-15 01:10:14.834218] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:02.271 [2024-10-15 01:10:14.834229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:02.271 [2024-10-15 01:10:14.834340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.271 01:10:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.271 [ 00:09:02.271 { 00:09:02.271 "name": "BaseBdev3", 00:09:02.271 "aliases": [ 00:09:02.271 "0293a436-7836-4a9d-b2ab-c2ab19ab6aef" 00:09:02.271 ], 00:09:02.271 "product_name": "Malloc disk", 00:09:02.271 "block_size": 512, 00:09:02.271 "num_blocks": 65536, 00:09:02.271 "uuid": "0293a436-7836-4a9d-b2ab-c2ab19ab6aef", 00:09:02.271 "assigned_rate_limits": { 00:09:02.271 "rw_ios_per_sec": 0, 00:09:02.271 "rw_mbytes_per_sec": 0, 00:09:02.271 "r_mbytes_per_sec": 0, 00:09:02.271 "w_mbytes_per_sec": 0 00:09:02.271 }, 00:09:02.271 "claimed": true, 00:09:02.271 "claim_type": "exclusive_write", 00:09:02.271 "zoned": false, 00:09:02.271 "supported_io_types": { 00:09:02.271 "read": true, 00:09:02.271 "write": true, 00:09:02.271 "unmap": true, 00:09:02.271 "flush": true, 00:09:02.271 "reset": true, 00:09:02.271 "nvme_admin": false, 00:09:02.271 "nvme_io": false, 00:09:02.271 "nvme_io_md": false, 00:09:02.271 "write_zeroes": true, 00:09:02.271 "zcopy": true, 00:09:02.271 "get_zone_info": false, 00:09:02.271 "zone_management": false, 00:09:02.271 "zone_append": false, 00:09:02.271 "compare": false, 00:09:02.271 "compare_and_write": false, 00:09:02.271 "abort": true, 00:09:02.271 "seek_hole": false, 00:09:02.271 "seek_data": false, 00:09:02.271 "copy": true, 00:09:02.271 "nvme_iov_md": false 00:09:02.271 }, 00:09:02.271 "memory_domains": [ 00:09:02.271 { 00:09:02.271 "dma_device_id": "system", 00:09:02.271 "dma_device_type": 1 00:09:02.271 }, 00:09:02.271 { 00:09:02.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.271 "dma_device_type": 2 00:09:02.271 } 00:09:02.271 ], 00:09:02.271 "driver_specific": {} 00:09:02.271 } 00:09:02.271 ] 
00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.271 
01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.271 "name": "Existed_Raid", 00:09:02.271 "uuid": "893c2d78-557f-4740-80e7-b26345317102", 00:09:02.271 "strip_size_kb": 0, 00:09:02.271 "state": "online", 00:09:02.271 "raid_level": "raid1", 00:09:02.271 "superblock": true, 00:09:02.271 "num_base_bdevs": 3, 00:09:02.271 "num_base_bdevs_discovered": 3, 00:09:02.271 "num_base_bdevs_operational": 3, 00:09:02.271 "base_bdevs_list": [ 00:09:02.271 { 00:09:02.271 "name": "BaseBdev1", 00:09:02.271 "uuid": "60580bdb-da00-42c4-8d1c-98ec85f7d7e2", 00:09:02.271 "is_configured": true, 00:09:02.271 "data_offset": 2048, 00:09:02.271 "data_size": 63488 00:09:02.271 }, 00:09:02.271 { 00:09:02.271 "name": "BaseBdev2", 00:09:02.271 "uuid": "020f2aff-c083-487e-9888-5ce02898eca2", 00:09:02.271 "is_configured": true, 00:09:02.271 "data_offset": 2048, 00:09:02.271 "data_size": 63488 00:09:02.271 }, 00:09:02.271 { 00:09:02.271 "name": "BaseBdev3", 00:09:02.271 "uuid": "0293a436-7836-4a9d-b2ab-c2ab19ab6aef", 00:09:02.271 "is_configured": true, 00:09:02.271 "data_offset": 2048, 00:09:02.271 "data_size": 63488 00:09:02.271 } 00:09:02.271 ] 00:09:02.271 }' 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.271 01:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.841 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:02.841 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:02.841 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:02.841 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:02.841 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.841 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.841 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:02.841 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.842 [2024-10-15 01:10:15.313123] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.842 "name": "Existed_Raid", 00:09:02.842 "aliases": [ 00:09:02.842 "893c2d78-557f-4740-80e7-b26345317102" 00:09:02.842 ], 00:09:02.842 "product_name": "Raid Volume", 00:09:02.842 "block_size": 512, 00:09:02.842 "num_blocks": 63488, 00:09:02.842 "uuid": "893c2d78-557f-4740-80e7-b26345317102", 00:09:02.842 "assigned_rate_limits": { 00:09:02.842 "rw_ios_per_sec": 0, 00:09:02.842 "rw_mbytes_per_sec": 0, 00:09:02.842 "r_mbytes_per_sec": 0, 00:09:02.842 "w_mbytes_per_sec": 0 00:09:02.842 }, 00:09:02.842 "claimed": false, 00:09:02.842 "zoned": false, 00:09:02.842 "supported_io_types": { 00:09:02.842 "read": true, 00:09:02.842 "write": true, 00:09:02.842 "unmap": false, 00:09:02.842 "flush": false, 00:09:02.842 "reset": true, 00:09:02.842 "nvme_admin": false, 00:09:02.842 "nvme_io": false, 00:09:02.842 "nvme_io_md": false, 00:09:02.842 "write_zeroes": true, 
00:09:02.842 "zcopy": false, 00:09:02.842 "get_zone_info": false, 00:09:02.842 "zone_management": false, 00:09:02.842 "zone_append": false, 00:09:02.842 "compare": false, 00:09:02.842 "compare_and_write": false, 00:09:02.842 "abort": false, 00:09:02.842 "seek_hole": false, 00:09:02.842 "seek_data": false, 00:09:02.842 "copy": false, 00:09:02.842 "nvme_iov_md": false 00:09:02.842 }, 00:09:02.842 "memory_domains": [ 00:09:02.842 { 00:09:02.842 "dma_device_id": "system", 00:09:02.842 "dma_device_type": 1 00:09:02.842 }, 00:09:02.842 { 00:09:02.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.842 "dma_device_type": 2 00:09:02.842 }, 00:09:02.842 { 00:09:02.842 "dma_device_id": "system", 00:09:02.842 "dma_device_type": 1 00:09:02.842 }, 00:09:02.842 { 00:09:02.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.842 "dma_device_type": 2 00:09:02.842 }, 00:09:02.842 { 00:09:02.842 "dma_device_id": "system", 00:09:02.842 "dma_device_type": 1 00:09:02.842 }, 00:09:02.842 { 00:09:02.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.842 "dma_device_type": 2 00:09:02.842 } 00:09:02.842 ], 00:09:02.842 "driver_specific": { 00:09:02.842 "raid": { 00:09:02.842 "uuid": "893c2d78-557f-4740-80e7-b26345317102", 00:09:02.842 "strip_size_kb": 0, 00:09:02.842 "state": "online", 00:09:02.842 "raid_level": "raid1", 00:09:02.842 "superblock": true, 00:09:02.842 "num_base_bdevs": 3, 00:09:02.842 "num_base_bdevs_discovered": 3, 00:09:02.842 "num_base_bdevs_operational": 3, 00:09:02.842 "base_bdevs_list": [ 00:09:02.842 { 00:09:02.842 "name": "BaseBdev1", 00:09:02.842 "uuid": "60580bdb-da00-42c4-8d1c-98ec85f7d7e2", 00:09:02.842 "is_configured": true, 00:09:02.842 "data_offset": 2048, 00:09:02.842 "data_size": 63488 00:09:02.842 }, 00:09:02.842 { 00:09:02.842 "name": "BaseBdev2", 00:09:02.842 "uuid": "020f2aff-c083-487e-9888-5ce02898eca2", 00:09:02.842 "is_configured": true, 00:09:02.842 "data_offset": 2048, 00:09:02.842 "data_size": 63488 00:09:02.842 }, 00:09:02.842 { 
00:09:02.842 "name": "BaseBdev3", 00:09:02.842 "uuid": "0293a436-7836-4a9d-b2ab-c2ab19ab6aef", 00:09:02.842 "is_configured": true, 00:09:02.842 "data_offset": 2048, 00:09:02.842 "data_size": 63488 00:09:02.842 } 00:09:02.842 ] 00:09:02.842 } 00:09:02.842 } 00:09:02.842 }' 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:02.842 BaseBdev2 00:09:02.842 BaseBdev3' 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.842 01:10:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.842 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.103 [2024-10-15 01:10:15.592357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.103 
01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.103 "name": "Existed_Raid", 00:09:03.103 "uuid": "893c2d78-557f-4740-80e7-b26345317102", 00:09:03.103 "strip_size_kb": 0, 00:09:03.103 "state": "online", 00:09:03.103 "raid_level": "raid1", 00:09:03.103 "superblock": true, 00:09:03.103 "num_base_bdevs": 3, 00:09:03.103 "num_base_bdevs_discovered": 2, 00:09:03.103 "num_base_bdevs_operational": 2, 00:09:03.103 "base_bdevs_list": [ 00:09:03.103 { 00:09:03.103 "name": null, 00:09:03.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.103 "is_configured": false, 00:09:03.103 "data_offset": 0, 00:09:03.103 "data_size": 63488 00:09:03.103 }, 00:09:03.103 { 00:09:03.103 "name": "BaseBdev2", 00:09:03.103 "uuid": "020f2aff-c083-487e-9888-5ce02898eca2", 00:09:03.103 "is_configured": true, 00:09:03.103 "data_offset": 2048, 00:09:03.103 "data_size": 63488 00:09:03.103 }, 00:09:03.103 { 00:09:03.103 "name": "BaseBdev3", 00:09:03.103 "uuid": "0293a436-7836-4a9d-b2ab-c2ab19ab6aef", 00:09:03.103 "is_configured": true, 00:09:03.103 "data_offset": 2048, 00:09:03.103 "data_size": 63488 00:09:03.103 } 00:09:03.103 ] 00:09:03.103 }' 00:09:03.103 01:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.103 
01:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.363 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:03.363 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:03.363 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.363 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:03.363 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.363 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.363 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.363 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:03.363 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:03.363 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:03.363 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.363 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.363 [2024-10-15 01:10:16.078904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.624 [2024-10-15 01:10:16.130105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:03.624 [2024-10-15 01:10:16.130213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.624 [2024-10-15 01:10:16.141573] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.624 [2024-10-15 01:10:16.141628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.624 [2024-10-15 01:10:16.141650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.624 BaseBdev2 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:03.624 01:10:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.624 [ 00:09:03.624 { 00:09:03.624 "name": "BaseBdev2", 00:09:03.624 "aliases": [ 00:09:03.624 "2eff3fc2-1b28-43e9-a3c7-645073903d4a" 00:09:03.624 ], 00:09:03.624 "product_name": "Malloc disk", 00:09:03.624 "block_size": 512, 00:09:03.624 "num_blocks": 65536, 00:09:03.624 "uuid": "2eff3fc2-1b28-43e9-a3c7-645073903d4a", 00:09:03.624 "assigned_rate_limits": { 00:09:03.624 "rw_ios_per_sec": 0, 00:09:03.624 "rw_mbytes_per_sec": 0, 00:09:03.624 "r_mbytes_per_sec": 0, 00:09:03.624 "w_mbytes_per_sec": 0 00:09:03.624 }, 00:09:03.624 "claimed": false, 00:09:03.624 "zoned": false, 00:09:03.624 "supported_io_types": { 00:09:03.624 "read": true, 00:09:03.624 "write": true, 00:09:03.624 "unmap": true, 00:09:03.624 "flush": true, 00:09:03.624 "reset": true, 00:09:03.624 "nvme_admin": false, 00:09:03.624 "nvme_io": false, 00:09:03.624 "nvme_io_md": false, 00:09:03.624 
"write_zeroes": true, 00:09:03.624 "zcopy": true, 00:09:03.624 "get_zone_info": false, 00:09:03.624 "zone_management": false, 00:09:03.624 "zone_append": false, 00:09:03.624 "compare": false, 00:09:03.624 "compare_and_write": false, 00:09:03.624 "abort": true, 00:09:03.624 "seek_hole": false, 00:09:03.624 "seek_data": false, 00:09:03.624 "copy": true, 00:09:03.624 "nvme_iov_md": false 00:09:03.624 }, 00:09:03.624 "memory_domains": [ 00:09:03.624 { 00:09:03.624 "dma_device_id": "system", 00:09:03.624 "dma_device_type": 1 00:09:03.624 }, 00:09:03.624 { 00:09:03.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.624 "dma_device_type": 2 00:09:03.624 } 00:09:03.624 ], 00:09:03.624 "driver_specific": {} 00:09:03.624 } 00:09:03.624 ] 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:03.624 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.625 BaseBdev3 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
local bdev_timeout= 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.625 [ 00:09:03.625 { 00:09:03.625 "name": "BaseBdev3", 00:09:03.625 "aliases": [ 00:09:03.625 "5407889f-f17e-459f-8476-802e2c72e766" 00:09:03.625 ], 00:09:03.625 "product_name": "Malloc disk", 00:09:03.625 "block_size": 512, 00:09:03.625 "num_blocks": 65536, 00:09:03.625 "uuid": "5407889f-f17e-459f-8476-802e2c72e766", 00:09:03.625 "assigned_rate_limits": { 00:09:03.625 "rw_ios_per_sec": 0, 00:09:03.625 "rw_mbytes_per_sec": 0, 00:09:03.625 "r_mbytes_per_sec": 0, 00:09:03.625 "w_mbytes_per_sec": 0 00:09:03.625 }, 00:09:03.625 "claimed": false, 00:09:03.625 "zoned": false, 00:09:03.625 "supported_io_types": { 00:09:03.625 "read": true, 00:09:03.625 "write": true, 00:09:03.625 "unmap": true, 00:09:03.625 "flush": true, 00:09:03.625 "reset": true, 00:09:03.625 "nvme_admin": false, 00:09:03.625 "nvme_io": false, 
00:09:03.625 "nvme_io_md": false, 00:09:03.625 "write_zeroes": true, 00:09:03.625 "zcopy": true, 00:09:03.625 "get_zone_info": false, 00:09:03.625 "zone_management": false, 00:09:03.625 "zone_append": false, 00:09:03.625 "compare": false, 00:09:03.625 "compare_and_write": false, 00:09:03.625 "abort": true, 00:09:03.625 "seek_hole": false, 00:09:03.625 "seek_data": false, 00:09:03.625 "copy": true, 00:09:03.625 "nvme_iov_md": false 00:09:03.625 }, 00:09:03.625 "memory_domains": [ 00:09:03.625 { 00:09:03.625 "dma_device_id": "system", 00:09:03.625 "dma_device_type": 1 00:09:03.625 }, 00:09:03.625 { 00:09:03.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.625 "dma_device_type": 2 00:09:03.625 } 00:09:03.625 ], 00:09:03.625 "driver_specific": {} 00:09:03.625 } 00:09:03.625 ] 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.625 [2024-10-15 01:10:16.301432] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:03.625 [2024-10-15 01:10:16.301476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.625 [2024-10-15 01:10:16.301494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:09:03.625 [2024-10-15 01:10:16.303289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.625 01:10:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.884 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.884 "name": "Existed_Raid", 00:09:03.884 "uuid": "4b91b02b-4388-4085-9aae-cbc5f7c934f8", 00:09:03.884 "strip_size_kb": 0, 00:09:03.884 "state": "configuring", 00:09:03.884 "raid_level": "raid1", 00:09:03.884 "superblock": true, 00:09:03.884 "num_base_bdevs": 3, 00:09:03.884 "num_base_bdevs_discovered": 2, 00:09:03.884 "num_base_bdevs_operational": 3, 00:09:03.884 "base_bdevs_list": [ 00:09:03.884 { 00:09:03.884 "name": "BaseBdev1", 00:09:03.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.884 "is_configured": false, 00:09:03.884 "data_offset": 0, 00:09:03.884 "data_size": 0 00:09:03.884 }, 00:09:03.884 { 00:09:03.884 "name": "BaseBdev2", 00:09:03.884 "uuid": "2eff3fc2-1b28-43e9-a3c7-645073903d4a", 00:09:03.884 "is_configured": true, 00:09:03.884 "data_offset": 2048, 00:09:03.884 "data_size": 63488 00:09:03.884 }, 00:09:03.884 { 00:09:03.884 "name": "BaseBdev3", 00:09:03.884 "uuid": "5407889f-f17e-459f-8476-802e2c72e766", 00:09:03.884 "is_configured": true, 00:09:03.884 "data_offset": 2048, 00:09:03.884 "data_size": 63488 00:09:03.884 } 00:09:03.884 ] 00:09:03.884 }' 00:09:03.884 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.884 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:04.143 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.143 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.143 [2024-10-15 01:10:16.696747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:04.143 01:10:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.143 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:04.143 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.143 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.143 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.143 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.143 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.144 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.144 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.144 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.144 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.144 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.144 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.144 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.144 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.144 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.144 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.144 "name": "Existed_Raid", 00:09:04.144 "uuid": 
"4b91b02b-4388-4085-9aae-cbc5f7c934f8", 00:09:04.144 "strip_size_kb": 0, 00:09:04.144 "state": "configuring", 00:09:04.144 "raid_level": "raid1", 00:09:04.144 "superblock": true, 00:09:04.144 "num_base_bdevs": 3, 00:09:04.144 "num_base_bdevs_discovered": 1, 00:09:04.144 "num_base_bdevs_operational": 3, 00:09:04.144 "base_bdevs_list": [ 00:09:04.144 { 00:09:04.144 "name": "BaseBdev1", 00:09:04.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.144 "is_configured": false, 00:09:04.144 "data_offset": 0, 00:09:04.144 "data_size": 0 00:09:04.144 }, 00:09:04.144 { 00:09:04.144 "name": null, 00:09:04.144 "uuid": "2eff3fc2-1b28-43e9-a3c7-645073903d4a", 00:09:04.144 "is_configured": false, 00:09:04.144 "data_offset": 0, 00:09:04.144 "data_size": 63488 00:09:04.144 }, 00:09:04.144 { 00:09:04.144 "name": "BaseBdev3", 00:09:04.144 "uuid": "5407889f-f17e-459f-8476-802e2c72e766", 00:09:04.144 "is_configured": true, 00:09:04.144 "data_offset": 2048, 00:09:04.144 "data_size": 63488 00:09:04.144 } 00:09:04.144 ] 00:09:04.144 }' 00:09:04.144 01:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.144 01:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.403 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.403 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:04.403 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.403 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.403 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:04.663 01:10:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.663 [2024-10-15 01:10:17.150987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.663 BaseBdev1 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:04.663 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.663 [ 00:09:04.664 { 00:09:04.664 "name": "BaseBdev1", 00:09:04.664 "aliases": [ 00:09:04.664 "bb7e12f6-ffb9-42a5-b18a-00fa357204c6" 00:09:04.664 ], 00:09:04.664 "product_name": "Malloc disk", 00:09:04.664 "block_size": 512, 00:09:04.664 "num_blocks": 65536, 00:09:04.664 "uuid": "bb7e12f6-ffb9-42a5-b18a-00fa357204c6", 00:09:04.664 "assigned_rate_limits": { 00:09:04.664 "rw_ios_per_sec": 0, 00:09:04.664 "rw_mbytes_per_sec": 0, 00:09:04.664 "r_mbytes_per_sec": 0, 00:09:04.664 "w_mbytes_per_sec": 0 00:09:04.664 }, 00:09:04.664 "claimed": true, 00:09:04.664 "claim_type": "exclusive_write", 00:09:04.664 "zoned": false, 00:09:04.664 "supported_io_types": { 00:09:04.664 "read": true, 00:09:04.664 "write": true, 00:09:04.664 "unmap": true, 00:09:04.664 "flush": true, 00:09:04.664 "reset": true, 00:09:04.664 "nvme_admin": false, 00:09:04.664 "nvme_io": false, 00:09:04.664 "nvme_io_md": false, 00:09:04.664 "write_zeroes": true, 00:09:04.664 "zcopy": true, 00:09:04.664 "get_zone_info": false, 00:09:04.664 "zone_management": false, 00:09:04.664 "zone_append": false, 00:09:04.664 "compare": false, 00:09:04.664 "compare_and_write": false, 00:09:04.664 "abort": true, 00:09:04.664 "seek_hole": false, 00:09:04.664 "seek_data": false, 00:09:04.664 "copy": true, 00:09:04.664 "nvme_iov_md": false 00:09:04.664 }, 00:09:04.664 "memory_domains": [ 00:09:04.664 { 00:09:04.664 "dma_device_id": "system", 00:09:04.664 "dma_device_type": 1 00:09:04.664 }, 00:09:04.664 { 00:09:04.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.664 "dma_device_type": 2 00:09:04.664 } 00:09:04.664 ], 00:09:04.664 "driver_specific": {} 00:09:04.664 } 00:09:04.664 ] 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:04.664 
01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.664 "name": "Existed_Raid", 00:09:04.664 "uuid": "4b91b02b-4388-4085-9aae-cbc5f7c934f8", 00:09:04.664 "strip_size_kb": 0, 
00:09:04.664 "state": "configuring", 00:09:04.664 "raid_level": "raid1", 00:09:04.664 "superblock": true, 00:09:04.664 "num_base_bdevs": 3, 00:09:04.664 "num_base_bdevs_discovered": 2, 00:09:04.664 "num_base_bdevs_operational": 3, 00:09:04.664 "base_bdevs_list": [ 00:09:04.664 { 00:09:04.664 "name": "BaseBdev1", 00:09:04.664 "uuid": "bb7e12f6-ffb9-42a5-b18a-00fa357204c6", 00:09:04.664 "is_configured": true, 00:09:04.664 "data_offset": 2048, 00:09:04.664 "data_size": 63488 00:09:04.664 }, 00:09:04.664 { 00:09:04.664 "name": null, 00:09:04.664 "uuid": "2eff3fc2-1b28-43e9-a3c7-645073903d4a", 00:09:04.664 "is_configured": false, 00:09:04.664 "data_offset": 0, 00:09:04.664 "data_size": 63488 00:09:04.664 }, 00:09:04.664 { 00:09:04.664 "name": "BaseBdev3", 00:09:04.664 "uuid": "5407889f-f17e-459f-8476-802e2c72e766", 00:09:04.664 "is_configured": true, 00:09:04.664 "data_offset": 2048, 00:09:04.664 "data_size": 63488 00:09:04.664 } 00:09:04.664 ] 00:09:04.664 }' 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.664 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.926 [2024-10-15 01:10:17.634288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:04.926 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.204 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.204 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.204 "name": "Existed_Raid", 00:09:05.204 "uuid": "4b91b02b-4388-4085-9aae-cbc5f7c934f8", 00:09:05.204 "strip_size_kb": 0, 00:09:05.204 "state": "configuring", 00:09:05.204 "raid_level": "raid1", 00:09:05.204 "superblock": true, 00:09:05.204 "num_base_bdevs": 3, 00:09:05.204 "num_base_bdevs_discovered": 1, 00:09:05.204 "num_base_bdevs_operational": 3, 00:09:05.204 "base_bdevs_list": [ 00:09:05.204 { 00:09:05.204 "name": "BaseBdev1", 00:09:05.204 "uuid": "bb7e12f6-ffb9-42a5-b18a-00fa357204c6", 00:09:05.204 "is_configured": true, 00:09:05.204 "data_offset": 2048, 00:09:05.204 "data_size": 63488 00:09:05.204 }, 00:09:05.204 { 00:09:05.204 "name": null, 00:09:05.204 "uuid": "2eff3fc2-1b28-43e9-a3c7-645073903d4a", 00:09:05.204 "is_configured": false, 00:09:05.204 "data_offset": 0, 00:09:05.204 "data_size": 63488 00:09:05.204 }, 00:09:05.204 { 00:09:05.204 "name": null, 00:09:05.204 "uuid": "5407889f-f17e-459f-8476-802e2c72e766", 00:09:05.204 "is_configured": false, 00:09:05.204 "data_offset": 0, 00:09:05.204 "data_size": 63488 00:09:05.204 } 00:09:05.204 ] 00:09:05.204 }' 00:09:05.204 01:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.204 01:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.480 [2024-10-15 01:10:18.145444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.480 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.480 "name": "Existed_Raid", 00:09:05.480 "uuid": "4b91b02b-4388-4085-9aae-cbc5f7c934f8", 00:09:05.480 "strip_size_kb": 0, 00:09:05.480 "state": "configuring", 00:09:05.480 "raid_level": "raid1", 00:09:05.480 "superblock": true, 00:09:05.480 "num_base_bdevs": 3, 00:09:05.480 "num_base_bdevs_discovered": 2, 00:09:05.480 "num_base_bdevs_operational": 3, 00:09:05.480 "base_bdevs_list": [ 00:09:05.480 { 00:09:05.480 "name": "BaseBdev1", 00:09:05.480 "uuid": "bb7e12f6-ffb9-42a5-b18a-00fa357204c6", 00:09:05.480 "is_configured": true, 00:09:05.480 "data_offset": 2048, 00:09:05.481 "data_size": 63488 00:09:05.481 }, 00:09:05.481 { 00:09:05.481 "name": null, 00:09:05.481 "uuid": "2eff3fc2-1b28-43e9-a3c7-645073903d4a", 00:09:05.481 "is_configured": false, 00:09:05.481 "data_offset": 0, 00:09:05.481 "data_size": 63488 00:09:05.481 }, 00:09:05.481 { 00:09:05.481 "name": "BaseBdev3", 00:09:05.481 "uuid": "5407889f-f17e-459f-8476-802e2c72e766", 00:09:05.481 "is_configured": true, 00:09:05.481 "data_offset": 2048, 00:09:05.481 "data_size": 63488 00:09:05.481 } 00:09:05.481 ] 00:09:05.481 }' 00:09:05.481 01:10:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.481 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.053 [2024-10-15 01:10:18.600682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.053 "name": "Existed_Raid", 00:09:06.053 "uuid": "4b91b02b-4388-4085-9aae-cbc5f7c934f8", 00:09:06.053 "strip_size_kb": 0, 00:09:06.053 "state": "configuring", 00:09:06.053 "raid_level": "raid1", 00:09:06.053 "superblock": true, 00:09:06.053 "num_base_bdevs": 3, 00:09:06.053 "num_base_bdevs_discovered": 1, 00:09:06.053 "num_base_bdevs_operational": 3, 00:09:06.053 "base_bdevs_list": [ 00:09:06.053 { 00:09:06.053 "name": null, 00:09:06.053 "uuid": "bb7e12f6-ffb9-42a5-b18a-00fa357204c6", 00:09:06.053 "is_configured": false, 00:09:06.053 "data_offset": 0, 00:09:06.053 "data_size": 63488 00:09:06.053 }, 00:09:06.053 { 00:09:06.053 "name": null, 00:09:06.053 "uuid": 
"2eff3fc2-1b28-43e9-a3c7-645073903d4a", 00:09:06.053 "is_configured": false, 00:09:06.053 "data_offset": 0, 00:09:06.053 "data_size": 63488 00:09:06.053 }, 00:09:06.053 { 00:09:06.053 "name": "BaseBdev3", 00:09:06.053 "uuid": "5407889f-f17e-459f-8476-802e2c72e766", 00:09:06.053 "is_configured": true, 00:09:06.053 "data_offset": 2048, 00:09:06.053 "data_size": 63488 00:09:06.053 } 00:09:06.053 ] 00:09:06.053 }' 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.053 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.313 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.313 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.313 01:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.313 01:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:06.313 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.313 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:06.313 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:06.313 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.313 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.313 [2024-10-15 01:10:19.034421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.572 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.572 "name": "Existed_Raid", 00:09:06.572 "uuid": "4b91b02b-4388-4085-9aae-cbc5f7c934f8", 00:09:06.572 "strip_size_kb": 0, 00:09:06.572 "state": "configuring", 00:09:06.572 
"raid_level": "raid1", 00:09:06.572 "superblock": true, 00:09:06.572 "num_base_bdevs": 3, 00:09:06.572 "num_base_bdevs_discovered": 2, 00:09:06.572 "num_base_bdevs_operational": 3, 00:09:06.572 "base_bdevs_list": [ 00:09:06.572 { 00:09:06.572 "name": null, 00:09:06.572 "uuid": "bb7e12f6-ffb9-42a5-b18a-00fa357204c6", 00:09:06.572 "is_configured": false, 00:09:06.572 "data_offset": 0, 00:09:06.572 "data_size": 63488 00:09:06.572 }, 00:09:06.572 { 00:09:06.572 "name": "BaseBdev2", 00:09:06.572 "uuid": "2eff3fc2-1b28-43e9-a3c7-645073903d4a", 00:09:06.572 "is_configured": true, 00:09:06.572 "data_offset": 2048, 00:09:06.572 "data_size": 63488 00:09:06.572 }, 00:09:06.572 { 00:09:06.572 "name": "BaseBdev3", 00:09:06.572 "uuid": "5407889f-f17e-459f-8476-802e2c72e766", 00:09:06.572 "is_configured": true, 00:09:06.572 "data_offset": 2048, 00:09:06.573 "data_size": 63488 00:09:06.573 } 00:09:06.573 ] 00:09:06.573 }' 00:09:06.573 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.573 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.832 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.832 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.832 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.832 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:06.832 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.832 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:06.832 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.832 01:10:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:06.832 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.832 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.832 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.092 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bb7e12f6-ffb9-42a5-b18a-00fa357204c6 00:09:07.092 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.092 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.092 [2024-10-15 01:10:19.576486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:07.092 [2024-10-15 01:10:19.576660] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:07.092 [2024-10-15 01:10:19.576672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:07.092 [2024-10-15 01:10:19.576923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:09:07.092 NewBaseBdev 00:09:07.092 [2024-10-15 01:10:19.577051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:07.093 [2024-10-15 01:10:19.577068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:07.093 [2024-10-15 01:10:19.577168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:07.093 
01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.093 [ 00:09:07.093 { 00:09:07.093 "name": "NewBaseBdev", 00:09:07.093 "aliases": [ 00:09:07.093 "bb7e12f6-ffb9-42a5-b18a-00fa357204c6" 00:09:07.093 ], 00:09:07.093 "product_name": "Malloc disk", 00:09:07.093 "block_size": 512, 00:09:07.093 "num_blocks": 65536, 00:09:07.093 "uuid": "bb7e12f6-ffb9-42a5-b18a-00fa357204c6", 00:09:07.093 "assigned_rate_limits": { 00:09:07.093 "rw_ios_per_sec": 0, 00:09:07.093 "rw_mbytes_per_sec": 0, 00:09:07.093 "r_mbytes_per_sec": 0, 00:09:07.093 "w_mbytes_per_sec": 0 00:09:07.093 }, 00:09:07.093 "claimed": true, 00:09:07.093 "claim_type": "exclusive_write", 00:09:07.093 
"zoned": false, 00:09:07.093 "supported_io_types": { 00:09:07.093 "read": true, 00:09:07.093 "write": true, 00:09:07.093 "unmap": true, 00:09:07.093 "flush": true, 00:09:07.093 "reset": true, 00:09:07.093 "nvme_admin": false, 00:09:07.093 "nvme_io": false, 00:09:07.093 "nvme_io_md": false, 00:09:07.093 "write_zeroes": true, 00:09:07.093 "zcopy": true, 00:09:07.093 "get_zone_info": false, 00:09:07.093 "zone_management": false, 00:09:07.093 "zone_append": false, 00:09:07.093 "compare": false, 00:09:07.093 "compare_and_write": false, 00:09:07.093 "abort": true, 00:09:07.093 "seek_hole": false, 00:09:07.093 "seek_data": false, 00:09:07.093 "copy": true, 00:09:07.093 "nvme_iov_md": false 00:09:07.093 }, 00:09:07.093 "memory_domains": [ 00:09:07.093 { 00:09:07.093 "dma_device_id": "system", 00:09:07.093 "dma_device_type": 1 00:09:07.093 }, 00:09:07.093 { 00:09:07.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.093 "dma_device_type": 2 00:09:07.093 } 00:09:07.093 ], 00:09:07.093 "driver_specific": {} 00:09:07.093 } 00:09:07.093 ] 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.093 "name": "Existed_Raid", 00:09:07.093 "uuid": "4b91b02b-4388-4085-9aae-cbc5f7c934f8", 00:09:07.093 "strip_size_kb": 0, 00:09:07.093 "state": "online", 00:09:07.093 "raid_level": "raid1", 00:09:07.093 "superblock": true, 00:09:07.093 "num_base_bdevs": 3, 00:09:07.093 "num_base_bdevs_discovered": 3, 00:09:07.093 "num_base_bdevs_operational": 3, 00:09:07.093 "base_bdevs_list": [ 00:09:07.093 { 00:09:07.093 "name": "NewBaseBdev", 00:09:07.093 "uuid": "bb7e12f6-ffb9-42a5-b18a-00fa357204c6", 00:09:07.093 "is_configured": true, 00:09:07.093 "data_offset": 2048, 00:09:07.093 "data_size": 63488 00:09:07.093 }, 00:09:07.093 { 00:09:07.093 "name": "BaseBdev2", 00:09:07.093 "uuid": "2eff3fc2-1b28-43e9-a3c7-645073903d4a", 00:09:07.093 "is_configured": true, 00:09:07.093 "data_offset": 2048, 00:09:07.093 "data_size": 63488 00:09:07.093 }, 00:09:07.093 
{ 00:09:07.093 "name": "BaseBdev3", 00:09:07.093 "uuid": "5407889f-f17e-459f-8476-802e2c72e766", 00:09:07.093 "is_configured": true, 00:09:07.093 "data_offset": 2048, 00:09:07.093 "data_size": 63488 00:09:07.093 } 00:09:07.093 ] 00:09:07.093 }' 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.093 01:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.353 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:07.353 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:07.353 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:07.353 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:07.353 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:07.353 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:07.353 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:07.353 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.353 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.353 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:07.353 [2024-10-15 01:10:20.024074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.353 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.353 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:07.353 "name": "Existed_Raid", 00:09:07.353 
"aliases": [ 00:09:07.353 "4b91b02b-4388-4085-9aae-cbc5f7c934f8" 00:09:07.353 ], 00:09:07.353 "product_name": "Raid Volume", 00:09:07.353 "block_size": 512, 00:09:07.353 "num_blocks": 63488, 00:09:07.353 "uuid": "4b91b02b-4388-4085-9aae-cbc5f7c934f8", 00:09:07.353 "assigned_rate_limits": { 00:09:07.353 "rw_ios_per_sec": 0, 00:09:07.353 "rw_mbytes_per_sec": 0, 00:09:07.353 "r_mbytes_per_sec": 0, 00:09:07.353 "w_mbytes_per_sec": 0 00:09:07.353 }, 00:09:07.353 "claimed": false, 00:09:07.354 "zoned": false, 00:09:07.354 "supported_io_types": { 00:09:07.354 "read": true, 00:09:07.354 "write": true, 00:09:07.354 "unmap": false, 00:09:07.354 "flush": false, 00:09:07.354 "reset": true, 00:09:07.354 "nvme_admin": false, 00:09:07.354 "nvme_io": false, 00:09:07.354 "nvme_io_md": false, 00:09:07.354 "write_zeroes": true, 00:09:07.354 "zcopy": false, 00:09:07.354 "get_zone_info": false, 00:09:07.354 "zone_management": false, 00:09:07.354 "zone_append": false, 00:09:07.354 "compare": false, 00:09:07.354 "compare_and_write": false, 00:09:07.354 "abort": false, 00:09:07.354 "seek_hole": false, 00:09:07.354 "seek_data": false, 00:09:07.354 "copy": false, 00:09:07.354 "nvme_iov_md": false 00:09:07.354 }, 00:09:07.354 "memory_domains": [ 00:09:07.354 { 00:09:07.354 "dma_device_id": "system", 00:09:07.354 "dma_device_type": 1 00:09:07.354 }, 00:09:07.354 { 00:09:07.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.354 "dma_device_type": 2 00:09:07.354 }, 00:09:07.354 { 00:09:07.354 "dma_device_id": "system", 00:09:07.354 "dma_device_type": 1 00:09:07.354 }, 00:09:07.354 { 00:09:07.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.354 "dma_device_type": 2 00:09:07.354 }, 00:09:07.354 { 00:09:07.354 "dma_device_id": "system", 00:09:07.354 "dma_device_type": 1 00:09:07.354 }, 00:09:07.354 { 00:09:07.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.354 "dma_device_type": 2 00:09:07.354 } 00:09:07.354 ], 00:09:07.354 "driver_specific": { 00:09:07.354 "raid": { 00:09:07.354 
"uuid": "4b91b02b-4388-4085-9aae-cbc5f7c934f8", 00:09:07.354 "strip_size_kb": 0, 00:09:07.354 "state": "online", 00:09:07.354 "raid_level": "raid1", 00:09:07.354 "superblock": true, 00:09:07.354 "num_base_bdevs": 3, 00:09:07.354 "num_base_bdevs_discovered": 3, 00:09:07.354 "num_base_bdevs_operational": 3, 00:09:07.354 "base_bdevs_list": [ 00:09:07.354 { 00:09:07.354 "name": "NewBaseBdev", 00:09:07.354 "uuid": "bb7e12f6-ffb9-42a5-b18a-00fa357204c6", 00:09:07.354 "is_configured": true, 00:09:07.354 "data_offset": 2048, 00:09:07.354 "data_size": 63488 00:09:07.354 }, 00:09:07.354 { 00:09:07.354 "name": "BaseBdev2", 00:09:07.354 "uuid": "2eff3fc2-1b28-43e9-a3c7-645073903d4a", 00:09:07.354 "is_configured": true, 00:09:07.354 "data_offset": 2048, 00:09:07.354 "data_size": 63488 00:09:07.354 }, 00:09:07.354 { 00:09:07.354 "name": "BaseBdev3", 00:09:07.354 "uuid": "5407889f-f17e-459f-8476-802e2c72e766", 00:09:07.354 "is_configured": true, 00:09:07.354 "data_offset": 2048, 00:09:07.354 "data_size": 63488 00:09:07.354 } 00:09:07.354 ] 00:09:07.354 } 00:09:07.354 } 00:09:07.354 }' 00:09:07.354 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:07.613 BaseBdev2 00:09:07.613 BaseBdev3' 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:07.613 01:10:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.613 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.614 [2024-10-15 01:10:20.303291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.614 [2024-10-15 01:10:20.303321] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.614 [2024-10-15 01:10:20.303399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.614 [2024-10-15 01:10:20.303666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.614 [2024-10-15 01:10:20.303683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78804 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # 
'[' -z 78804 ']' 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 78804 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.614 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78804 00:09:07.873 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:07.873 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:07.873 killing process with pid 78804 00:09:07.873 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78804' 00:09:07.873 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 78804 00:09:07.873 [2024-10-15 01:10:20.351010] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.873 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 78804 00:09:07.873 [2024-10-15 01:10:20.381807] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.873 01:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:07.873 00:09:07.873 real 0m8.558s 00:09:07.873 user 0m14.652s 00:09:07.873 sys 0m1.729s 00:09:07.874 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.874 01:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.874 ************************************ 00:09:07.874 END TEST raid_state_function_test_sb 00:09:07.874 ************************************ 00:09:08.133 01:10:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:08.133 01:10:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:08.133 01:10:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.133 01:10:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.133 ************************************ 00:09:08.133 START TEST raid_superblock_test 00:09:08.133 ************************************ 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79402 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79402 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79402 ']' 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.133 01:10:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.133 [2024-10-15 01:10:20.750221] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:09:08.133 [2024-10-15 01:10:20.750345] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79402 ] 00:09:08.392 [2024-10-15 01:10:20.894988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.392 [2024-10-15 01:10:20.921257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.392 [2024-10-15 01:10:20.964038] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.392 [2024-10-15 01:10:20.964076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:08.960 
01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.960 malloc1 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.960 [2024-10-15 01:10:21.594528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:08.960 [2024-10-15 01:10:21.594632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.960 [2024-10-15 01:10:21.594670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:08.960 [2024-10-15 01:10:21.594701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.960 [2024-10-15 01:10:21.596839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.960 [2024-10-15 01:10:21.596910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:08.960 pt1 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.960 malloc2 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.960 [2024-10-15 01:10:21.623154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:08.960 [2024-10-15 01:10:21.623274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.960 [2024-10-15 01:10:21.623295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:08.960 [2024-10-15 01:10:21.623306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.960 [2024-10-15 01:10:21.625393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.960 [2024-10-15 01:10:21.625427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:08.960 
pt2 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.960 malloc3 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.960 [2024-10-15 01:10:21.651727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:08.960 [2024-10-15 01:10:21.651817] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.960 [2024-10-15 01:10:21.651867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:08.960 [2024-10-15 01:10:21.651911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.960 [2024-10-15 01:10:21.653987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.960 [2024-10-15 01:10:21.654057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:08.960 pt3 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.960 [2024-10-15 01:10:21.663771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:08.960 [2024-10-15 01:10:21.665599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:08.960 [2024-10-15 01:10:21.665705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:08.960 [2024-10-15 01:10:21.665870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:08.960 [2024-10-15 01:10:21.665914] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:08.960 [2024-10-15 01:10:21.666197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:08.960 
[2024-10-15 01:10:21.666366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:08.960 [2024-10-15 01:10:21.666413] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:08.960 [2024-10-15 01:10:21.666562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.960 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.961 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.961 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.961 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.961 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.961 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.961 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.961 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.961 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.961 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.961 01:10:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.220 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.220 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.220 "name": "raid_bdev1", 00:09:09.220 "uuid": "18bf0ffc-b440-4785-bc92-b1f859eca008", 00:09:09.220 "strip_size_kb": 0, 00:09:09.220 "state": "online", 00:09:09.220 "raid_level": "raid1", 00:09:09.220 "superblock": true, 00:09:09.220 "num_base_bdevs": 3, 00:09:09.220 "num_base_bdevs_discovered": 3, 00:09:09.220 "num_base_bdevs_operational": 3, 00:09:09.220 "base_bdevs_list": [ 00:09:09.220 { 00:09:09.220 "name": "pt1", 00:09:09.220 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.220 "is_configured": true, 00:09:09.220 "data_offset": 2048, 00:09:09.220 "data_size": 63488 00:09:09.220 }, 00:09:09.220 { 00:09:09.220 "name": "pt2", 00:09:09.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.220 "is_configured": true, 00:09:09.220 "data_offset": 2048, 00:09:09.220 "data_size": 63488 00:09:09.220 }, 00:09:09.220 { 00:09:09.220 "name": "pt3", 00:09:09.220 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:09.220 "is_configured": true, 00:09:09.220 "data_offset": 2048, 00:09:09.220 "data_size": 63488 00:09:09.220 } 00:09:09.220 ] 00:09:09.220 }' 00:09:09.220 01:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.220 01:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.480 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:09.480 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:09.480 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.480 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:09.480 01:10:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.480 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.480 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:09.480 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.480 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.480 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.480 [2024-10-15 01:10:22.111287] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.480 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.480 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.480 "name": "raid_bdev1", 00:09:09.480 "aliases": [ 00:09:09.480 "18bf0ffc-b440-4785-bc92-b1f859eca008" 00:09:09.480 ], 00:09:09.480 "product_name": "Raid Volume", 00:09:09.480 "block_size": 512, 00:09:09.480 "num_blocks": 63488, 00:09:09.480 "uuid": "18bf0ffc-b440-4785-bc92-b1f859eca008", 00:09:09.480 "assigned_rate_limits": { 00:09:09.480 "rw_ios_per_sec": 0, 00:09:09.480 "rw_mbytes_per_sec": 0, 00:09:09.480 "r_mbytes_per_sec": 0, 00:09:09.480 "w_mbytes_per_sec": 0 00:09:09.480 }, 00:09:09.480 "claimed": false, 00:09:09.480 "zoned": false, 00:09:09.480 "supported_io_types": { 00:09:09.480 "read": true, 00:09:09.480 "write": true, 00:09:09.480 "unmap": false, 00:09:09.480 "flush": false, 00:09:09.480 "reset": true, 00:09:09.480 "nvme_admin": false, 00:09:09.480 "nvme_io": false, 00:09:09.480 "nvme_io_md": false, 00:09:09.480 "write_zeroes": true, 00:09:09.480 "zcopy": false, 00:09:09.480 "get_zone_info": false, 00:09:09.480 "zone_management": false, 00:09:09.480 "zone_append": false, 00:09:09.480 "compare": false, 00:09:09.480 
"compare_and_write": false, 00:09:09.480 "abort": false, 00:09:09.480 "seek_hole": false, 00:09:09.480 "seek_data": false, 00:09:09.480 "copy": false, 00:09:09.480 "nvme_iov_md": false 00:09:09.480 }, 00:09:09.481 "memory_domains": [ 00:09:09.481 { 00:09:09.481 "dma_device_id": "system", 00:09:09.481 "dma_device_type": 1 00:09:09.481 }, 00:09:09.481 { 00:09:09.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.481 "dma_device_type": 2 00:09:09.481 }, 00:09:09.481 { 00:09:09.481 "dma_device_id": "system", 00:09:09.481 "dma_device_type": 1 00:09:09.481 }, 00:09:09.481 { 00:09:09.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.481 "dma_device_type": 2 00:09:09.481 }, 00:09:09.481 { 00:09:09.481 "dma_device_id": "system", 00:09:09.481 "dma_device_type": 1 00:09:09.481 }, 00:09:09.481 { 00:09:09.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.481 "dma_device_type": 2 00:09:09.481 } 00:09:09.481 ], 00:09:09.481 "driver_specific": { 00:09:09.481 "raid": { 00:09:09.481 "uuid": "18bf0ffc-b440-4785-bc92-b1f859eca008", 00:09:09.481 "strip_size_kb": 0, 00:09:09.481 "state": "online", 00:09:09.481 "raid_level": "raid1", 00:09:09.481 "superblock": true, 00:09:09.481 "num_base_bdevs": 3, 00:09:09.481 "num_base_bdevs_discovered": 3, 00:09:09.481 "num_base_bdevs_operational": 3, 00:09:09.481 "base_bdevs_list": [ 00:09:09.481 { 00:09:09.481 "name": "pt1", 00:09:09.481 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.481 "is_configured": true, 00:09:09.481 "data_offset": 2048, 00:09:09.481 "data_size": 63488 00:09:09.481 }, 00:09:09.481 { 00:09:09.481 "name": "pt2", 00:09:09.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.481 "is_configured": true, 00:09:09.481 "data_offset": 2048, 00:09:09.481 "data_size": 63488 00:09:09.481 }, 00:09:09.481 { 00:09:09.481 "name": "pt3", 00:09:09.481 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:09.481 "is_configured": true, 00:09:09.481 "data_offset": 2048, 00:09:09.481 "data_size": 63488 00:09:09.481 } 
00:09:09.481 ] 00:09:09.481 } 00:09:09.481 } 00:09:09.481 }' 00:09:09.481 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.740 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:09.740 pt2 00:09:09.740 pt3' 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.741 01:10:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.741 [2024-10-15 01:10:22.414674] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=18bf0ffc-b440-4785-bc92-b1f859eca008 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 18bf0ffc-b440-4785-bc92-b1f859eca008 ']' 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.741 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.741 [2024-10-15 01:10:22.462339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.741 [2024-10-15 01:10:22.462399] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.741 [2024-10-15 01:10:22.462490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.741 [2024-10-15 01:10:22.462590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.741 [2024-10-15 01:10:22.462672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:10.001 
01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:10.001 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.002 [2024-10-15 01:10:22.610119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:10.002 [2024-10-15 01:10:22.611977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:10.002 [2024-10-15 01:10:22.612016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:09:10.002 [2024-10-15 01:10:22.612062] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:10.002 [2024-10-15 01:10:22.612113] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:10.002 [2024-10-15 01:10:22.612132] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:10.002 [2024-10-15 01:10:22.612144] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:10.002 [2024-10-15 01:10:22.612153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:09:10.002 request: 00:09:10.002 { 00:09:10.002 "name": "raid_bdev1", 00:09:10.002 "raid_level": "raid1", 00:09:10.002 "base_bdevs": [ 00:09:10.002 "malloc1", 00:09:10.002 "malloc2", 00:09:10.002 "malloc3" 00:09:10.002 ], 00:09:10.002 "superblock": false, 00:09:10.002 "method": "bdev_raid_create", 00:09:10.002 "req_id": 1 00:09:10.002 } 00:09:10.002 Got JSON-RPC error response 00:09:10.002 response: 00:09:10.002 { 00:09:10.002 "code": -17, 00:09:10.002 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:10.002 } 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.002 01:10:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.002 [2024-10-15 01:10:22.677975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:10.002 [2024-10-15 01:10:22.678077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.002 [2024-10-15 01:10:22.678125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:10.002 [2024-10-15 01:10:22.678171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.002 [2024-10-15 01:10:22.680338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.002 [2024-10-15 01:10:22.680420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:10.002 [2024-10-15 01:10:22.680523] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:10.002 [2024-10-15 01:10:22.680582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:10.002 pt1 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.002 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.262 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.262 "name": "raid_bdev1", 00:09:10.262 "uuid": "18bf0ffc-b440-4785-bc92-b1f859eca008", 00:09:10.262 "strip_size_kb": 0, 00:09:10.262 "state": "configuring", 00:09:10.262 
"raid_level": "raid1", 00:09:10.262 "superblock": true, 00:09:10.262 "num_base_bdevs": 3, 00:09:10.262 "num_base_bdevs_discovered": 1, 00:09:10.262 "num_base_bdevs_operational": 3, 00:09:10.262 "base_bdevs_list": [ 00:09:10.262 { 00:09:10.262 "name": "pt1", 00:09:10.262 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.262 "is_configured": true, 00:09:10.262 "data_offset": 2048, 00:09:10.262 "data_size": 63488 00:09:10.262 }, 00:09:10.262 { 00:09:10.262 "name": null, 00:09:10.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.262 "is_configured": false, 00:09:10.262 "data_offset": 2048, 00:09:10.262 "data_size": 63488 00:09:10.262 }, 00:09:10.262 { 00:09:10.262 "name": null, 00:09:10.262 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.262 "is_configured": false, 00:09:10.262 "data_offset": 2048, 00:09:10.262 "data_size": 63488 00:09:10.262 } 00:09:10.262 ] 00:09:10.262 }' 00:09:10.262 01:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.262 01:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.522 [2024-10-15 01:10:23.117254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:10.522 [2024-10-15 01:10:23.117355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.522 [2024-10-15 01:10:23.117392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:10.522 [2024-10-15 01:10:23.117423] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.522 [2024-10-15 01:10:23.117829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.522 [2024-10-15 01:10:23.117889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:10.522 [2024-10-15 01:10:23.117984] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:10.522 [2024-10-15 01:10:23.118035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:10.522 pt2 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.522 [2024-10-15 01:10:23.129240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.522 "name": "raid_bdev1", 00:09:10.522 "uuid": "18bf0ffc-b440-4785-bc92-b1f859eca008", 00:09:10.522 "strip_size_kb": 0, 00:09:10.522 "state": "configuring", 00:09:10.522 "raid_level": "raid1", 00:09:10.522 "superblock": true, 00:09:10.522 "num_base_bdevs": 3, 00:09:10.522 "num_base_bdevs_discovered": 1, 00:09:10.522 "num_base_bdevs_operational": 3, 00:09:10.522 "base_bdevs_list": [ 00:09:10.522 { 00:09:10.522 "name": "pt1", 00:09:10.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.522 "is_configured": true, 00:09:10.522 "data_offset": 2048, 00:09:10.522 "data_size": 63488 00:09:10.522 }, 00:09:10.522 { 00:09:10.522 "name": null, 00:09:10.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.522 "is_configured": false, 00:09:10.522 "data_offset": 0, 00:09:10.522 "data_size": 63488 00:09:10.522 }, 00:09:10.522 { 00:09:10.522 "name": null, 00:09:10.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.522 "is_configured": false, 00:09:10.522 "data_offset": 2048, 00:09:10.522 
"data_size": 63488 00:09:10.522 } 00:09:10.522 ] 00:09:10.522 }' 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.522 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.093 [2024-10-15 01:10:23.564474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:11.093 [2024-10-15 01:10:23.564597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.093 [2024-10-15 01:10:23.564638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:11.093 [2024-10-15 01:10:23.564667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.093 [2024-10-15 01:10:23.565079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.093 [2024-10-15 01:10:23.565134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:11.093 [2024-10-15 01:10:23.565244] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:11.093 [2024-10-15 01:10:23.565294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:11.093 pt2 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.093 [2024-10-15 01:10:23.576447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:11.093 [2024-10-15 01:10:23.576526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.093 [2024-10-15 01:10:23.576576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:11.093 [2024-10-15 01:10:23.576603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.093 [2024-10-15 01:10:23.576939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.093 [2024-10-15 01:10:23.576992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:11.093 [2024-10-15 01:10:23.577074] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:11.093 [2024-10-15 01:10:23.577118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:11.093 [2024-10-15 01:10:23.577244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:11.093 [2024-10-15 01:10:23.577283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:11.093 [2024-10-15 01:10:23.577531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:11.093 [2024-10-15 01:10:23.577677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 
00:09:11.093 [2024-10-15 01:10:23.577718] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:11.093 [2024-10-15 01:10:23.577856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.093 pt3 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.093 "name": "raid_bdev1", 00:09:11.093 "uuid": "18bf0ffc-b440-4785-bc92-b1f859eca008", 00:09:11.093 "strip_size_kb": 0, 00:09:11.093 "state": "online", 00:09:11.093 "raid_level": "raid1", 00:09:11.093 "superblock": true, 00:09:11.093 "num_base_bdevs": 3, 00:09:11.093 "num_base_bdevs_discovered": 3, 00:09:11.093 "num_base_bdevs_operational": 3, 00:09:11.093 "base_bdevs_list": [ 00:09:11.093 { 00:09:11.093 "name": "pt1", 00:09:11.093 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.093 "is_configured": true, 00:09:11.093 "data_offset": 2048, 00:09:11.093 "data_size": 63488 00:09:11.093 }, 00:09:11.093 { 00:09:11.093 "name": "pt2", 00:09:11.093 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.093 "is_configured": true, 00:09:11.093 "data_offset": 2048, 00:09:11.093 "data_size": 63488 00:09:11.093 }, 00:09:11.093 { 00:09:11.093 "name": "pt3", 00:09:11.093 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:11.093 "is_configured": true, 00:09:11.093 "data_offset": 2048, 00:09:11.093 "data_size": 63488 00:09:11.093 } 00:09:11.093 ] 00:09:11.093 }' 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.093 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.354 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:11.354 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:11.354 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:11.354 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.354 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.354 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.354 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:11.354 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.354 01:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.354 01:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.354 [2024-10-15 01:10:24.008025] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.354 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.354 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.354 "name": "raid_bdev1", 00:09:11.354 "aliases": [ 00:09:11.354 "18bf0ffc-b440-4785-bc92-b1f859eca008" 00:09:11.354 ], 00:09:11.354 "product_name": "Raid Volume", 00:09:11.354 "block_size": 512, 00:09:11.354 "num_blocks": 63488, 00:09:11.354 "uuid": "18bf0ffc-b440-4785-bc92-b1f859eca008", 00:09:11.354 "assigned_rate_limits": { 00:09:11.354 "rw_ios_per_sec": 0, 00:09:11.354 "rw_mbytes_per_sec": 0, 00:09:11.354 "r_mbytes_per_sec": 0, 00:09:11.354 "w_mbytes_per_sec": 0 00:09:11.354 }, 00:09:11.354 "claimed": false, 00:09:11.354 "zoned": false, 00:09:11.354 "supported_io_types": { 00:09:11.354 "read": true, 00:09:11.354 "write": true, 00:09:11.354 "unmap": false, 00:09:11.354 "flush": false, 00:09:11.354 "reset": true, 00:09:11.354 "nvme_admin": false, 00:09:11.354 "nvme_io": false, 00:09:11.354 "nvme_io_md": false, 00:09:11.354 "write_zeroes": true, 00:09:11.354 "zcopy": false, 00:09:11.354 "get_zone_info": false, 
00:09:11.354 "zone_management": false, 00:09:11.354 "zone_append": false, 00:09:11.354 "compare": false, 00:09:11.354 "compare_and_write": false, 00:09:11.354 "abort": false, 00:09:11.354 "seek_hole": false, 00:09:11.354 "seek_data": false, 00:09:11.354 "copy": false, 00:09:11.354 "nvme_iov_md": false 00:09:11.354 }, 00:09:11.354 "memory_domains": [ 00:09:11.354 { 00:09:11.354 "dma_device_id": "system", 00:09:11.354 "dma_device_type": 1 00:09:11.354 }, 00:09:11.354 { 00:09:11.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.354 "dma_device_type": 2 00:09:11.354 }, 00:09:11.354 { 00:09:11.354 "dma_device_id": "system", 00:09:11.354 "dma_device_type": 1 00:09:11.354 }, 00:09:11.354 { 00:09:11.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.354 "dma_device_type": 2 00:09:11.354 }, 00:09:11.354 { 00:09:11.354 "dma_device_id": "system", 00:09:11.354 "dma_device_type": 1 00:09:11.354 }, 00:09:11.354 { 00:09:11.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.354 "dma_device_type": 2 00:09:11.354 } 00:09:11.354 ], 00:09:11.354 "driver_specific": { 00:09:11.354 "raid": { 00:09:11.354 "uuid": "18bf0ffc-b440-4785-bc92-b1f859eca008", 00:09:11.354 "strip_size_kb": 0, 00:09:11.354 "state": "online", 00:09:11.354 "raid_level": "raid1", 00:09:11.354 "superblock": true, 00:09:11.354 "num_base_bdevs": 3, 00:09:11.354 "num_base_bdevs_discovered": 3, 00:09:11.354 "num_base_bdevs_operational": 3, 00:09:11.354 "base_bdevs_list": [ 00:09:11.354 { 00:09:11.354 "name": "pt1", 00:09:11.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.354 "is_configured": true, 00:09:11.354 "data_offset": 2048, 00:09:11.354 "data_size": 63488 00:09:11.354 }, 00:09:11.354 { 00:09:11.354 "name": "pt2", 00:09:11.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.354 "is_configured": true, 00:09:11.354 "data_offset": 2048, 00:09:11.354 "data_size": 63488 00:09:11.354 }, 00:09:11.354 { 00:09:11.354 "name": "pt3", 00:09:11.354 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:11.354 "is_configured": true, 00:09:11.354 "data_offset": 2048, 00:09:11.354 "data_size": 63488 00:09:11.354 } 00:09:11.354 ] 00:09:11.354 } 00:09:11.354 } 00:09:11.354 }' 00:09:11.354 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.354 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:11.354 pt2 00:09:11.354 pt3' 00:09:11.354 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.614 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.615 [2024-10-15 01:10:24.287637] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 18bf0ffc-b440-4785-bc92-b1f859eca008 '!=' 18bf0ffc-b440-4785-bc92-b1f859eca008 ']' 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.615 [2024-10-15 01:10:24.331339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:11.615 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.875 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.875 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.875 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.875 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.875 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.875 01:10:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.875 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.875 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.875 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.875 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.875 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.875 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.875 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.875 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.875 "name": "raid_bdev1", 00:09:11.875 "uuid": "18bf0ffc-b440-4785-bc92-b1f859eca008", 00:09:11.875 "strip_size_kb": 0, 00:09:11.875 "state": "online", 00:09:11.875 "raid_level": "raid1", 00:09:11.875 "superblock": true, 00:09:11.875 "num_base_bdevs": 3, 00:09:11.875 "num_base_bdevs_discovered": 2, 00:09:11.875 "num_base_bdevs_operational": 2, 00:09:11.875 "base_bdevs_list": [ 00:09:11.875 { 00:09:11.875 "name": null, 00:09:11.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.875 "is_configured": false, 00:09:11.875 "data_offset": 0, 00:09:11.875 "data_size": 63488 00:09:11.875 }, 00:09:11.875 { 00:09:11.875 "name": "pt2", 00:09:11.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.875 "is_configured": true, 00:09:11.875 "data_offset": 2048, 00:09:11.875 "data_size": 63488 00:09:11.875 }, 00:09:11.875 { 00:09:11.875 "name": "pt3", 00:09:11.875 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:11.875 "is_configured": true, 00:09:11.875 "data_offset": 2048, 00:09:11.875 "data_size": 63488 00:09:11.875 } 
00:09:11.875 ] 00:09:11.875 }' 00:09:11.875 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.875 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.135 [2024-10-15 01:10:24.718595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.135 [2024-10-15 01:10:24.718680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.135 [2024-10-15 01:10:24.718762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.135 [2024-10-15 01:10:24.718825] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.135 [2024-10-15 01:10:24.718835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.135 01:10:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.135 [2024-10-15 01:10:24.802421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:12.135 [2024-10-15 01:10:24.802468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.135 [2024-10-15 01:10:24.802485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:12.135 [2024-10-15 01:10:24.802494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.135 [2024-10-15 01:10:24.804681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.135 [2024-10-15 01:10:24.804717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:12.135 [2024-10-15 01:10:24.804788] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:12.135 [2024-10-15 01:10:24.804820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:12.135 pt2 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.135 01:10:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.135 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.395 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.395 "name": "raid_bdev1", 00:09:12.395 "uuid": "18bf0ffc-b440-4785-bc92-b1f859eca008", 00:09:12.395 "strip_size_kb": 0, 00:09:12.395 "state": "configuring", 00:09:12.395 "raid_level": "raid1", 00:09:12.395 "superblock": true, 00:09:12.395 "num_base_bdevs": 3, 00:09:12.395 "num_base_bdevs_discovered": 1, 00:09:12.395 "num_base_bdevs_operational": 2, 00:09:12.395 "base_bdevs_list": [ 00:09:12.395 { 00:09:12.395 "name": null, 00:09:12.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.395 "is_configured": false, 00:09:12.395 "data_offset": 2048, 00:09:12.395 "data_size": 63488 00:09:12.395 }, 00:09:12.395 { 00:09:12.395 "name": "pt2", 00:09:12.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.395 "is_configured": true, 00:09:12.395 "data_offset": 2048, 00:09:12.395 "data_size": 63488 00:09:12.395 }, 00:09:12.395 { 00:09:12.395 "name": null, 00:09:12.395 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.395 "is_configured": false, 00:09:12.395 "data_offset": 2048, 00:09:12.395 "data_size": 63488 00:09:12.395 } 
00:09:12.395 ] 00:09:12.395 }' 00:09:12.395 01:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.395 01:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.655 [2024-10-15 01:10:25.265654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:12.655 [2024-10-15 01:10:25.265755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.655 [2024-10-15 01:10:25.265794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:12.655 [2024-10-15 01:10:25.265844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.655 [2024-10-15 01:10:25.266264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.655 [2024-10-15 01:10:25.266326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:12.655 [2024-10-15 01:10:25.266434] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:12.655 [2024-10-15 01:10:25.266491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:12.655 [2024-10-15 01:10:25.266626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 
00:09:12.655 [2024-10-15 01:10:25.266662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:12.655 [2024-10-15 01:10:25.266918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:12.655 [2024-10-15 01:10:25.267081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:12.655 [2024-10-15 01:10:25.267126] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:12.655 [2024-10-15 01:10:25.267278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.655 pt3 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.655 
01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.655 "name": "raid_bdev1", 00:09:12.655 "uuid": "18bf0ffc-b440-4785-bc92-b1f859eca008", 00:09:12.655 "strip_size_kb": 0, 00:09:12.655 "state": "online", 00:09:12.655 "raid_level": "raid1", 00:09:12.655 "superblock": true, 00:09:12.655 "num_base_bdevs": 3, 00:09:12.655 "num_base_bdevs_discovered": 2, 00:09:12.655 "num_base_bdevs_operational": 2, 00:09:12.655 "base_bdevs_list": [ 00:09:12.655 { 00:09:12.655 "name": null, 00:09:12.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.655 "is_configured": false, 00:09:12.655 "data_offset": 2048, 00:09:12.655 "data_size": 63488 00:09:12.655 }, 00:09:12.655 { 00:09:12.655 "name": "pt2", 00:09:12.655 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.655 "is_configured": true, 00:09:12.655 "data_offset": 2048, 00:09:12.655 "data_size": 63488 00:09:12.655 }, 00:09:12.655 { 00:09:12.655 "name": "pt3", 00:09:12.655 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.655 "is_configured": true, 00:09:12.655 "data_offset": 2048, 00:09:12.655 "data_size": 63488 00:09:12.655 } 00:09:12.655 ] 00:09:12.655 }' 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.655 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.224 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.225 [2024-10-15 01:10:25.684950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:13.225 [2024-10-15 01:10:25.684992] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.225 [2024-10-15 01:10:25.685072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.225 [2024-10-15 01:10:25.685131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.225 [2024-10-15 01:10:25.685142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.225 [2024-10-15 01:10:25.756799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:13.225 [2024-10-15 01:10:25.756892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.225 [2024-10-15 01:10:25.756911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:13.225 [2024-10-15 01:10:25.756922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.225 [2024-10-15 01:10:25.759109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.225 [2024-10-15 01:10:25.759146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:13.225 [2024-10-15 01:10:25.759231] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:13.225 [2024-10-15 01:10:25.759293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:13.225 [2024-10-15 01:10:25.759410] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:13.225 [2024-10-15 01:10:25.759425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:13.225 [2024-10-15 01:10:25.759440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000002000 name raid_bdev1, state configuring 00:09:13.225 [2024-10-15 01:10:25.759489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:13.225 pt1 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.225 "name": "raid_bdev1", 00:09:13.225 "uuid": "18bf0ffc-b440-4785-bc92-b1f859eca008", 00:09:13.225 "strip_size_kb": 0, 00:09:13.225 "state": "configuring", 00:09:13.225 "raid_level": "raid1", 00:09:13.225 "superblock": true, 00:09:13.225 "num_base_bdevs": 3, 00:09:13.225 "num_base_bdevs_discovered": 1, 00:09:13.225 "num_base_bdevs_operational": 2, 00:09:13.225 "base_bdevs_list": [ 00:09:13.225 { 00:09:13.225 "name": null, 00:09:13.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.225 "is_configured": false, 00:09:13.225 "data_offset": 2048, 00:09:13.225 "data_size": 63488 00:09:13.225 }, 00:09:13.225 { 00:09:13.225 "name": "pt2", 00:09:13.225 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:13.225 "is_configured": true, 00:09:13.225 "data_offset": 2048, 00:09:13.225 "data_size": 63488 00:09:13.225 }, 00:09:13.225 { 00:09:13.225 "name": null, 00:09:13.225 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:13.225 "is_configured": false, 00:09:13.225 "data_offset": 2048, 00:09:13.225 "data_size": 63488 00:09:13.225 } 00:09:13.225 ] 00:09:13.225 }' 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.225 01:10:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.490 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:13.490 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:13.490 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.490 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.757 [2024-10-15 01:10:26.255941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:13.757 [2024-10-15 01:10:26.256006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.757 [2024-10-15 01:10:26.256025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:13.757 [2024-10-15 01:10:26.256036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.757 [2024-10-15 01:10:26.256458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.757 [2024-10-15 01:10:26.256486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:13.757 [2024-10-15 01:10:26.256560] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:13.757 [2024-10-15 01:10:26.256588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:13.757 [2024-10-15 01:10:26.256680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:09:13.757 [2024-10-15 01:10:26.256692] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:13.757 [2024-10-15 01:10:26.256912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:13.757 [2024-10-15 01:10:26.257041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:09:13.757 [2024-10-15 01:10:26.257050] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:09:13.757 [2024-10-15 01:10:26.257155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.757 pt3 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.757 "name": "raid_bdev1", 00:09:13.757 "uuid": "18bf0ffc-b440-4785-bc92-b1f859eca008", 00:09:13.757 "strip_size_kb": 0, 00:09:13.757 "state": "online", 00:09:13.757 "raid_level": "raid1", 00:09:13.757 "superblock": true, 00:09:13.757 "num_base_bdevs": 3, 00:09:13.757 "num_base_bdevs_discovered": 2, 00:09:13.757 "num_base_bdevs_operational": 2, 00:09:13.757 "base_bdevs_list": [ 00:09:13.757 { 00:09:13.757 "name": null, 00:09:13.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.757 "is_configured": false, 00:09:13.757 "data_offset": 2048, 00:09:13.757 "data_size": 63488 00:09:13.757 }, 00:09:13.757 { 00:09:13.757 "name": "pt2", 00:09:13.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:13.757 "is_configured": true, 00:09:13.757 "data_offset": 2048, 00:09:13.757 "data_size": 63488 00:09:13.757 }, 00:09:13.757 { 00:09:13.757 "name": "pt3", 00:09:13.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:13.757 "is_configured": true, 00:09:13.757 "data_offset": 2048, 00:09:13.757 "data_size": 63488 00:09:13.757 } 00:09:13.757 ] 00:09:13.757 }' 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.757 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.016 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:14.016 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:14.016 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.016 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.016 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.016 01:10:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:14.016 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:14.016 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:14.016 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.016 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.275 [2024-10-15 01:10:26.743436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.275 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.275 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 18bf0ffc-b440-4785-bc92-b1f859eca008 '!=' 18bf0ffc-b440-4785-bc92-b1f859eca008 ']' 00:09:14.275 01:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79402 00:09:14.275 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79402 ']' 00:09:14.275 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79402 00:09:14.275 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:14.275 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.275 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79402 00:09:14.275 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:14.275 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:14.275 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79402' 00:09:14.275 killing process with pid 79402 00:09:14.275 01:10:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 79402 00:09:14.275 [2024-10-15 01:10:26.823691] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:14.275 [2024-10-15 01:10:26.823819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.275 01:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79402 00:09:14.275 [2024-10-15 01:10:26.823913] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:14.275 [2024-10-15 01:10:26.823925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:09:14.275 [2024-10-15 01:10:26.856564] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:14.534 01:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:14.534 00:09:14.534 real 0m6.404s 00:09:14.534 user 0m10.837s 00:09:14.534 sys 0m1.291s 00:09:14.534 ************************************ 00:09:14.534 END TEST raid_superblock_test 00:09:14.534 ************************************ 00:09:14.534 01:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.534 01:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.534 01:10:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:14.534 01:10:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:14.534 01:10:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.534 01:10:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:14.534 ************************************ 00:09:14.534 START TEST raid_read_error_test 00:09:14.534 ************************************ 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:09:14.534 01:10:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:14.534 01:10:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RqfMo9QrkM 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79831 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79831 00:09:14.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 79831 ']' 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.534 01:10:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.534 [2024-10-15 01:10:27.236581] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:09:14.535 [2024-10-15 01:10:27.236691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79831 ] 00:09:14.794 [2024-10-15 01:10:27.366422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.794 [2024-10-15 01:10:27.391915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.794 [2024-10-15 01:10:27.435144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.794 [2024-10-15 01:10:27.435171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.362 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:15.363 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:15.363 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.363 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:15.363 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.363 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.621 BaseBdev1_malloc 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.621 true 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.621 [2024-10-15 01:10:28.102261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:15.621 [2024-10-15 01:10:28.102309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.621 [2024-10-15 01:10:28.102328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:15.621 [2024-10-15 01:10:28.102337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.621 [2024-10-15 01:10:28.104467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.621 [2024-10-15 01:10:28.104507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:15.621 BaseBdev1 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.621 BaseBdev2_malloc 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.621 true 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.621 [2024-10-15 01:10:28.142887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:15.621 [2024-10-15 01:10:28.142932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.621 [2024-10-15 01:10:28.142949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:15.621 [2024-10-15 01:10:28.142965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.621 [2024-10-15 01:10:28.145092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.621 [2024-10-15 01:10:28.145125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:15.621 BaseBdev2 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.621 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.622 BaseBdev3_malloc 00:09:15.622 01:10:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.622 true 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.622 [2024-10-15 01:10:28.183521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:15.622 [2024-10-15 01:10:28.183590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.622 [2024-10-15 01:10:28.183611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:15.622 [2024-10-15 01:10:28.183619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.622 [2024-10-15 01:10:28.185659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.622 [2024-10-15 01:10:28.185748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:15.622 BaseBdev3 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.622 [2024-10-15 01:10:28.195600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.622 [2024-10-15 01:10:28.197482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.622 [2024-10-15 01:10:28.197549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.622 [2024-10-15 01:10:28.197733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:15.622 [2024-10-15 01:10:28.197746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:15.622 [2024-10-15 01:10:28.197982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:15.622 [2024-10-15 01:10:28.198121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:15.622 [2024-10-15 01:10:28.198131] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:15.622 [2024-10-15 01:10:28.198276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.622 01:10:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.622 "name": "raid_bdev1", 00:09:15.622 "uuid": "e5ff51bb-7906-4d59-81bb-9f7f4740ed14", 00:09:15.622 "strip_size_kb": 0, 00:09:15.622 "state": "online", 00:09:15.622 "raid_level": "raid1", 00:09:15.622 "superblock": true, 00:09:15.622 "num_base_bdevs": 3, 00:09:15.622 "num_base_bdevs_discovered": 3, 00:09:15.622 "num_base_bdevs_operational": 3, 00:09:15.622 "base_bdevs_list": [ 00:09:15.622 { 00:09:15.622 "name": "BaseBdev1", 00:09:15.622 "uuid": "cb6fd63e-cd73-5988-b584-c9c670486a9a", 00:09:15.622 "is_configured": true, 00:09:15.622 "data_offset": 2048, 00:09:15.622 "data_size": 63488 00:09:15.622 }, 00:09:15.622 { 00:09:15.622 "name": "BaseBdev2", 00:09:15.622 "uuid": "66f1cab4-c32f-59bb-a5c7-aaf02224f9f4", 00:09:15.622 "is_configured": true, 00:09:15.622 "data_offset": 2048, 00:09:15.622 "data_size": 63488 
00:09:15.622 },
00:09:15.622 {
00:09:15.622 "name": "BaseBdev3",
00:09:15.622 "uuid": "f7c5c748-10f9-59ee-9932-30ad56da8637",
00:09:15.622 "is_configured": true,
00:09:15.622 "data_offset": 2048,
00:09:15.622 "data_size": 63488
00:09:15.622 }
00:09:15.622 ]
00:09:15.622 }'
00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:15.622 01:10:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.190 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:16.190 01:10:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:16.190 [2024-10-15 01:10:28.739024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]]
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.128 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:17.128 "name": "raid_bdev1",
00:09:17.128 "uuid": "e5ff51bb-7906-4d59-81bb-9f7f4740ed14",
00:09:17.128 "strip_size_kb": 0,
00:09:17.128 "state": "online",
00:09:17.128 "raid_level": "raid1",
00:09:17.128 "superblock": true,
00:09:17.128 "num_base_bdevs": 3,
00:09:17.128 "num_base_bdevs_discovered": 3,
00:09:17.128 "num_base_bdevs_operational": 3,
00:09:17.128 "base_bdevs_list": [
00:09:17.128 {
00:09:17.128 "name": "BaseBdev1",
00:09:17.128 "uuid": "cb6fd63e-cd73-5988-b584-c9c670486a9a",
00:09:17.128 "is_configured": true,
00:09:17.128 "data_offset": 2048,
00:09:17.128 "data_size": 63488
00:09:17.128 },
00:09:17.128 {
00:09:17.128 "name": "BaseBdev2",
00:09:17.128 "uuid": "66f1cab4-c32f-59bb-a5c7-aaf02224f9f4",
00:09:17.128 "is_configured": true,
00:09:17.128 "data_offset": 2048,
00:09:17.128 "data_size": 63488
00:09:17.128 },
00:09:17.128 {
00:09:17.128 "name": "BaseBdev3",
00:09:17.128 "uuid": "f7c5c748-10f9-59ee-9932-30ad56da8637",
00:09:17.128 "is_configured": true,
00:09:17.129 "data_offset": 2048,
00:09:17.129 "data_size": 63488
00:09:17.129 }
00:09:17.129 ]
00:09:17.129 }'
00:09:17.129 01:10:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:17.129 01:10:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.696 01:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:17.696 01:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.696 01:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.696 [2024-10-15 01:10:30.145843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:17.696 [2024-10-15 01:10:30.145941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:17.696 [2024-10-15 01:10:30.148508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:17.696 [2024-10-15 01:10:30.148593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:17.696 [2024-10-15 01:10:30.148726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:17.696 [2024-10-15 01:10:30.148814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline
00:09:17.696 {
00:09:17.696 "results": [
00:09:17.696 {
00:09:17.696 "job": "raid_bdev1",
00:09:17.696 "core_mask": "0x1",
00:09:17.696 "workload": "randrw",
00:09:17.696 "percentage": 50,
00:09:17.696 "status": "finished",
00:09:17.696 "queue_depth": 1,
00:09:17.696 "io_size": 131072,
00:09:17.696 "runtime": 1.407801,
00:09:17.696 "iops": 14631.329285886286,
00:09:17.696 "mibps": 1828.9161607357858,
00:09:17.696 "io_failed": 0,
00:09:17.696 "io_timeout": 0,
00:09:17.696 "avg_latency_us": 65.82275652318812,
00:09:17.696 "min_latency_us": 21.687336244541484,
00:09:17.696 "max_latency_us": 1416.6078602620087
00:09:17.696 }
00:09:17.696 ],
00:09:17.696 "core_count": 1
00:09:17.696 }
00:09:17.696 01:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.696 01:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79831
00:09:17.696 01:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 79831 ']'
00:09:17.696 01:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 79831
00:09:17.696 01:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:09:17.696 01:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:17.696 01:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79831
00:09:17.696 01:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:17.696 01:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 79831
01:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79831'
01:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 79831
[2024-10-15 01:10:30.195038] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:17.696 01:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 79831
00:09:17.696 [2024-10-15 01:10:30.220291] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:17.956 01:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:17.956 01:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RqfMo9QrkM
00:09:17.956 01:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:17.956 01:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:09:17.956 01:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:09:17.956 01:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:17.956 01:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:09:17.956 01:10:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:09:17.956
00:09:17.956 real 0m3.294s
00:09:17.956 user 0m4.194s
00:09:17.956 sys 0m0.517s
************************************
00:09:17.956 END TEST raid_read_error_test
************************************
00:09:17.956 01:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:17.956 01:10:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.956 01:10:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write
00:09:17.956 01:10:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:17.956 01:10:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:17.956 01:10:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x
************************************
00:09:17.956 START TEST raid_write_error_test
************************************
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZV11cDHamw
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79966
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79966
00:09:17.956 01:10:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 79966 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:10:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
01:10:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
01:10:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:17.957 01:10:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:17.957 01:10:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.957 [2024-10-15 01:10:30.606348] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization...
00:09:17.957 [2024-10-15 01:10:30.606535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79966 ]
00:09:18.216 [2024-10-15 01:10:30.749758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:18.216 [2024-10-15 01:10:30.775891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:18.216 [2024-10-15 01:10:30.818239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:18.216 [2024-10-15 01:10:30.818270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.786 BaseBdev1_malloc
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.786 true
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.786 [2024-10-15 01:10:31.456434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:18.786 [2024-10-15 01:10:31.456484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:18.786 [2024-10-15 01:10:31.456504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:09:18.786 [2024-10-15 01:10:31.456513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:18.786 [2024-10-15 01:10:31.458603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:18.786 [2024-10-15 01:10:31.458638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
BaseBdev1
01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.786 BaseBdev2_malloc
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.786 true
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.786 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:18.786 [2024-10-15 01:10:31.496957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:18.786 [2024-10-15 01:10:31.497005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:18.786 [2024-10-15 01:10:31.497024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:09:18.786 [2024-10-15 01:10:31.497040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:18.786 [2024-10-15 01:10:31.499089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:18.786 [2024-10-15 01:10:31.499164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
BaseBdev2
01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.046 BaseBdev3_malloc
00:09:19.046 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.046 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.047 true
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.047 [2024-10-15 01:10:31.537461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:09:19.047 [2024-10-15 01:10:31.537509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:19.047 [2024-10-15 01:10:31.537530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:09:19.047 [2024-10-15 01:10:31.537539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:19.047 [2024-10-15 01:10:31.539524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:19.047 [2024-10-15 01:10:31.539615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:09:19.047 BaseBdev3
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.047 [2024-10-15 01:10:31.549516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:19.047 [2024-10-15 01:10:31.551269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:19.047 [2024-10-15 01:10:31.551338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:19.047 [2024-10-15 01:10:31.551510] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:09:19.047 [2024-10-15 01:10:31.551523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:19.047 [2024-10-15 01:10:31.551764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:09:19.047 [2024-10-15 01:10:31.551906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
00:09:19.047 [2024-10-15 01:10:31.551920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80
00:09:19.047 [2024-10-15 01:10:31.552040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:19.047 "name": "raid_bdev1",
00:09:19.047 "uuid": "0d12f851-6245-4b1a-8d1f-8e44ce77c3ea",
00:09:19.047 "strip_size_kb": 0,
00:09:19.047 "state": "online",
00:09:19.047 "raid_level": "raid1",
00:09:19.047 "superblock": true,
00:09:19.047 "num_base_bdevs": 3,
00:09:19.047 "num_base_bdevs_discovered": 3,
00:09:19.047 "num_base_bdevs_operational": 3,
00:09:19.047 "base_bdevs_list": [
00:09:19.047 {
00:09:19.047 "name": "BaseBdev1",
00:09:19.047 "uuid": "dbcd68aa-7995-54fd-8214-ab9f90e88675",
00:09:19.047 "is_configured": true,
00:09:19.047 "data_offset": 2048,
00:09:19.047 "data_size": 63488
00:09:19.047 },
00:09:19.047 {
00:09:19.047 "name": "BaseBdev2",
00:09:19.047 "uuid": "e87baa65-70eb-5ea4-924e-7c2165a1434a",
00:09:19.047 "is_configured": true,
00:09:19.047 "data_offset": 2048,
00:09:19.047 "data_size": 63488
00:09:19.047 },
00:09:19.047 {
00:09:19.047 "name": "BaseBdev3",
00:09:19.047 "uuid": "b9cbfcb1-29ee-52c2-b073-7455cc57440f",
00:09:19.047 "is_configured": true,
00:09:19.047 "data_offset": 2048,
00:09:19.047 "data_size": 63488
00:09:19.047 }
00:09:19.047 ]
00:09:19.047 }'
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:19.047 01:10:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:19.307 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:19.307 01:10:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:19.565 [2024-10-15 01:10:32.085043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:09:20.503 01:10:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:09:20.503 01:10:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.503 01:10:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.503 [2024-10-15 01:10:32.999944] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:09:20.503 [2024-10-15 01:10:33.000005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:20.503 [2024-10-15 01:10:33.000231] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002600
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.503 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:20.503 "name": "raid_bdev1",
00:09:20.503 "uuid": "0d12f851-6245-4b1a-8d1f-8e44ce77c3ea",
00:09:20.503 "strip_size_kb": 0,
00:09:20.503 "state": "online",
00:09:20.503 "raid_level": "raid1",
00:09:20.503 "superblock": true,
00:09:20.503 "num_base_bdevs": 3,
00:09:20.503 "num_base_bdevs_discovered": 2,
00:09:20.503 "num_base_bdevs_operational": 2,
00:09:20.503 "base_bdevs_list": [
00:09:20.503 {
00:09:20.503 "name": null,
00:09:20.504 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:20.504 "is_configured": false,
00:09:20.504 "data_offset": 0,
00:09:20.504 "data_size": 63488
00:09:20.504 },
00:09:20.504 {
00:09:20.504 "name": "BaseBdev2",
00:09:20.504 "uuid": "e87baa65-70eb-5ea4-924e-7c2165a1434a",
00:09:20.504 "is_configured": true,
00:09:20.504 "data_offset": 2048,
00:09:20.504 "data_size": 63488
00:09:20.504 },
00:09:20.504 {
00:09:20.504 "name": "BaseBdev3",
00:09:20.504 "uuid": "b9cbfcb1-29ee-52c2-b073-7455cc57440f",
00:09:20.504 "is_configured": true,
00:09:20.504 "data_offset": 2048,
00:09:20.504 "data_size": 63488
00:09:20.504 }
00:09:20.504 ]
00:09:20.504 }'
00:09:20.504 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:20.504 01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.764 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:20.764 01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.764 01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:20.764 [2024-10-15 01:10:33.477992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:20.764 [2024-10-15 01:10:33.478077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:20.764 [2024-10-15 01:10:33.480559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:20.764 [2024-10-15 01:10:33.480644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:20.764 [2024-10-15 01:10:33.480744] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:20.764 [2024-10-15 01:10:33.480789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline
00:09:20.764 01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.764 {
00:09:20.764 "results": [
00:09:20.764 {
00:09:20.764 "job": "raid_bdev1",
00:09:20.764 "core_mask": "0x1",
00:09:20.764 "workload": "randrw",
00:09:20.764 "percentage": 50,
00:09:20.764 "status": "finished",
00:09:20.764 "queue_depth": 1,
00:09:20.764 "io_size": 131072,
00:09:20.764 "runtime": 1.393879,
00:09:20.764 "iops": 16362.969813018202,
00:09:20.764 "mibps": 2045.3712266272753,
00:09:20.764 "io_failed": 0,
00:09:20.764 "io_timeout": 0,
00:09:20.764 "avg_latency_us": 58.57159075418263,
00:09:20.764 "min_latency_us": 21.687336244541484,
00:09:20.764 "max_latency_us": 1345.0620087336245
00:09:20.764 }
00:09:20.764 ],
00:09:20.764 "core_count": 1
00:09:20.764 }
01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79966
01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 79966 ']'
01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 79966
01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:09:21.024 01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:21.024 01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79966
00:09:21.024 01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
killing process with pid 79966
01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79966'
01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 79966
[2024-10-15 01:10:33.521756] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 79966
[2024-10-15 01:10:33.546169] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZV11cDHamw
01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:21.024 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:21.284 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:09:21.284 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:09:21.284 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:21.284 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
************************************
00:09:21.284 END TEST raid_write_error_test
************************************
00:09:21.284 01:10:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:09:21.284
00:09:21.284 real 0m3.246s
00:09:21.284 user 0m4.180s
00:09:21.284 sys 0m0.483s
00:09:21.284 01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:21.284 01:10:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:21.284 01:10:33 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:09:21.284 01:10:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:09:21.284 01:10:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false
00:09:21.284 01:10:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:21.284 01:10:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:21.284 01:10:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x
************************************
00:09:21.284 START TEST raid_state_function_test
************************************
00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false
00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:21.284 01:10:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:21.284 Process raid pid: 80093 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80093 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80093' 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80093 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80093 ']' 00:09:21.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.284 01:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.284 [2024-10-15 01:10:33.921470] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:09:21.284 [2024-10-15 01:10:33.921582] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.544 [2024-10-15 01:10:34.047232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.544 [2024-10-15 01:10:34.072976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.544 [2024-10-15 01:10:34.115578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.544 [2024-10-15 01:10:34.115697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.129 [2024-10-15 01:10:34.737379] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.129 [2024-10-15 01:10:34.737432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.129 [2024-10-15 01:10:34.737457] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.129 [2024-10-15 01:10:34.737466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.129 [2024-10-15 01:10:34.737472] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:22.129 [2024-10-15 01:10:34.737484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.129 [2024-10-15 01:10:34.737490] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:22.129 [2024-10-15 01:10:34.737498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.129 "name": "Existed_Raid", 00:09:22.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.129 "strip_size_kb": 64, 00:09:22.129 "state": "configuring", 00:09:22.129 "raid_level": "raid0", 00:09:22.129 "superblock": false, 00:09:22.129 "num_base_bdevs": 4, 00:09:22.129 "num_base_bdevs_discovered": 0, 00:09:22.129 "num_base_bdevs_operational": 4, 00:09:22.129 "base_bdevs_list": [ 00:09:22.129 { 00:09:22.129 "name": "BaseBdev1", 00:09:22.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.129 "is_configured": false, 00:09:22.129 "data_offset": 0, 00:09:22.129 "data_size": 0 00:09:22.129 }, 00:09:22.129 { 00:09:22.129 "name": "BaseBdev2", 00:09:22.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.129 "is_configured": false, 00:09:22.129 "data_offset": 0, 00:09:22.129 "data_size": 0 00:09:22.129 }, 00:09:22.129 { 00:09:22.129 "name": "BaseBdev3", 00:09:22.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.129 "is_configured": false, 00:09:22.129 "data_offset": 0, 00:09:22.129 "data_size": 0 00:09:22.129 }, 00:09:22.129 { 00:09:22.129 "name": "BaseBdev4", 00:09:22.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.129 "is_configured": false, 00:09:22.129 "data_offset": 0, 00:09:22.129 "data_size": 0 00:09:22.129 } 00:09:22.129 ] 00:09:22.129 }' 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.129 01:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.699 [2024-10-15 01:10:35.212443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.699 [2024-10-15 01:10:35.212533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.699 [2024-10-15 01:10:35.224448] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.699 [2024-10-15 01:10:35.224533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.699 [2024-10-15 01:10:35.224562] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.699 [2024-10-15 01:10:35.224584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.699 [2024-10-15 01:10:35.224602] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:22.699 [2024-10-15 01:10:35.224638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.699 [2024-10-15 01:10:35.224662] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:22.699 [2024-10-15 01:10:35.224684] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.699 [2024-10-15 01:10:35.245329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.699 BaseBdev1 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.699 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.699 [ 00:09:22.699 { 00:09:22.699 "name": "BaseBdev1", 00:09:22.699 "aliases": [ 00:09:22.699 "84a3265b-c3b5-46ac-9a28-92354dc8fcb0" 00:09:22.699 ], 00:09:22.699 "product_name": "Malloc disk", 00:09:22.699 "block_size": 512, 00:09:22.699 "num_blocks": 65536, 00:09:22.699 "uuid": "84a3265b-c3b5-46ac-9a28-92354dc8fcb0", 00:09:22.699 "assigned_rate_limits": { 00:09:22.699 "rw_ios_per_sec": 0, 00:09:22.699 "rw_mbytes_per_sec": 0, 00:09:22.700 "r_mbytes_per_sec": 0, 00:09:22.700 "w_mbytes_per_sec": 0 00:09:22.700 }, 00:09:22.700 "claimed": true, 00:09:22.700 "claim_type": "exclusive_write", 00:09:22.700 "zoned": false, 00:09:22.700 "supported_io_types": { 00:09:22.700 "read": true, 00:09:22.700 "write": true, 00:09:22.700 "unmap": true, 00:09:22.700 "flush": true, 00:09:22.700 "reset": true, 00:09:22.700 "nvme_admin": false, 00:09:22.700 "nvme_io": false, 00:09:22.700 "nvme_io_md": false, 00:09:22.700 "write_zeroes": true, 00:09:22.700 "zcopy": true, 00:09:22.700 "get_zone_info": false, 00:09:22.700 "zone_management": false, 00:09:22.700 "zone_append": false, 00:09:22.700 "compare": false, 00:09:22.700 "compare_and_write": false, 00:09:22.700 "abort": true, 00:09:22.700 "seek_hole": false, 00:09:22.700 "seek_data": false, 00:09:22.700 "copy": true, 00:09:22.700 "nvme_iov_md": false 00:09:22.700 }, 00:09:22.700 "memory_domains": [ 00:09:22.700 { 00:09:22.700 "dma_device_id": "system", 00:09:22.700 "dma_device_type": 1 00:09:22.700 }, 00:09:22.700 { 00:09:22.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.700 "dma_device_type": 2 00:09:22.700 } 00:09:22.700 ], 00:09:22.700 "driver_specific": {} 00:09:22.700 } 00:09:22.700 ] 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.700 "name": "Existed_Raid", 
00:09:22.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.700 "strip_size_kb": 64, 00:09:22.700 "state": "configuring", 00:09:22.700 "raid_level": "raid0", 00:09:22.700 "superblock": false, 00:09:22.700 "num_base_bdevs": 4, 00:09:22.700 "num_base_bdevs_discovered": 1, 00:09:22.700 "num_base_bdevs_operational": 4, 00:09:22.700 "base_bdevs_list": [ 00:09:22.700 { 00:09:22.700 "name": "BaseBdev1", 00:09:22.700 "uuid": "84a3265b-c3b5-46ac-9a28-92354dc8fcb0", 00:09:22.700 "is_configured": true, 00:09:22.700 "data_offset": 0, 00:09:22.700 "data_size": 65536 00:09:22.700 }, 00:09:22.700 { 00:09:22.700 "name": "BaseBdev2", 00:09:22.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.700 "is_configured": false, 00:09:22.700 "data_offset": 0, 00:09:22.700 "data_size": 0 00:09:22.700 }, 00:09:22.700 { 00:09:22.700 "name": "BaseBdev3", 00:09:22.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.700 "is_configured": false, 00:09:22.700 "data_offset": 0, 00:09:22.700 "data_size": 0 00:09:22.700 }, 00:09:22.700 { 00:09:22.700 "name": "BaseBdev4", 00:09:22.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.700 "is_configured": false, 00:09:22.700 "data_offset": 0, 00:09:22.700 "data_size": 0 00:09:22.700 } 00:09:22.700 ] 00:09:22.700 }' 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.700 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.270 [2024-10-15 01:10:35.764499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.270 [2024-10-15 01:10:35.764547] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.270 [2024-10-15 01:10:35.772551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.270 [2024-10-15 01:10:35.774399] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.270 [2024-10-15 01:10:35.774471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.270 [2024-10-15 01:10:35.774498] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.270 [2024-10-15 01:10:35.774520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.270 [2024-10-15 01:10:35.774538] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:23.270 [2024-10-15 01:10:35.774557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.270 "name": "Existed_Raid", 00:09:23.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.270 "strip_size_kb": 64, 00:09:23.270 "state": "configuring", 00:09:23.270 "raid_level": "raid0", 00:09:23.270 "superblock": false, 00:09:23.270 "num_base_bdevs": 4, 00:09:23.270 
"num_base_bdevs_discovered": 1, 00:09:23.270 "num_base_bdevs_operational": 4, 00:09:23.270 "base_bdevs_list": [ 00:09:23.270 { 00:09:23.270 "name": "BaseBdev1", 00:09:23.270 "uuid": "84a3265b-c3b5-46ac-9a28-92354dc8fcb0", 00:09:23.270 "is_configured": true, 00:09:23.270 "data_offset": 0, 00:09:23.270 "data_size": 65536 00:09:23.270 }, 00:09:23.270 { 00:09:23.270 "name": "BaseBdev2", 00:09:23.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.270 "is_configured": false, 00:09:23.270 "data_offset": 0, 00:09:23.270 "data_size": 0 00:09:23.270 }, 00:09:23.270 { 00:09:23.270 "name": "BaseBdev3", 00:09:23.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.270 "is_configured": false, 00:09:23.270 "data_offset": 0, 00:09:23.270 "data_size": 0 00:09:23.270 }, 00:09:23.270 { 00:09:23.270 "name": "BaseBdev4", 00:09:23.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.270 "is_configured": false, 00:09:23.270 "data_offset": 0, 00:09:23.270 "data_size": 0 00:09:23.270 } 00:09:23.270 ] 00:09:23.270 }' 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.270 01:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.530 [2024-10-15 01:10:36.158802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.530 BaseBdev2 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:23.530 01:10:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.530 [ 00:09:23.530 { 00:09:23.530 "name": "BaseBdev2", 00:09:23.530 "aliases": [ 00:09:23.530 "5984ebbd-7118-423b-b257-a1fbe08498d9" 00:09:23.530 ], 00:09:23.530 "product_name": "Malloc disk", 00:09:23.530 "block_size": 512, 00:09:23.530 "num_blocks": 65536, 00:09:23.530 "uuid": "5984ebbd-7118-423b-b257-a1fbe08498d9", 00:09:23.530 "assigned_rate_limits": { 00:09:23.530 "rw_ios_per_sec": 0, 00:09:23.530 "rw_mbytes_per_sec": 0, 00:09:23.530 "r_mbytes_per_sec": 0, 00:09:23.530 "w_mbytes_per_sec": 0 00:09:23.530 }, 00:09:23.530 "claimed": true, 00:09:23.530 "claim_type": "exclusive_write", 00:09:23.530 "zoned": false, 00:09:23.530 "supported_io_types": { 
00:09:23.530 "read": true, 00:09:23.530 "write": true, 00:09:23.530 "unmap": true, 00:09:23.530 "flush": true, 00:09:23.530 "reset": true, 00:09:23.530 "nvme_admin": false, 00:09:23.530 "nvme_io": false, 00:09:23.530 "nvme_io_md": false, 00:09:23.530 "write_zeroes": true, 00:09:23.530 "zcopy": true, 00:09:23.530 "get_zone_info": false, 00:09:23.530 "zone_management": false, 00:09:23.530 "zone_append": false, 00:09:23.530 "compare": false, 00:09:23.530 "compare_and_write": false, 00:09:23.530 "abort": true, 00:09:23.530 "seek_hole": false, 00:09:23.530 "seek_data": false, 00:09:23.530 "copy": true, 00:09:23.530 "nvme_iov_md": false 00:09:23.530 }, 00:09:23.530 "memory_domains": [ 00:09:23.530 { 00:09:23.530 "dma_device_id": "system", 00:09:23.530 "dma_device_type": 1 00:09:23.530 }, 00:09:23.530 { 00:09:23.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.530 "dma_device_type": 2 00:09:23.530 } 00:09:23.530 ], 00:09:23.530 "driver_specific": {} 00:09:23.530 } 00:09:23.530 ] 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:23.530 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.531 "name": "Existed_Raid", 00:09:23.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.531 "strip_size_kb": 64, 00:09:23.531 "state": "configuring", 00:09:23.531 "raid_level": "raid0", 00:09:23.531 "superblock": false, 00:09:23.531 "num_base_bdevs": 4, 00:09:23.531 "num_base_bdevs_discovered": 2, 00:09:23.531 "num_base_bdevs_operational": 4, 00:09:23.531 "base_bdevs_list": [ 00:09:23.531 { 00:09:23.531 "name": "BaseBdev1", 00:09:23.531 "uuid": "84a3265b-c3b5-46ac-9a28-92354dc8fcb0", 00:09:23.531 "is_configured": true, 00:09:23.531 "data_offset": 0, 00:09:23.531 "data_size": 65536 00:09:23.531 }, 00:09:23.531 { 00:09:23.531 "name": "BaseBdev2", 00:09:23.531 "uuid": "5984ebbd-7118-423b-b257-a1fbe08498d9", 00:09:23.531 
"is_configured": true, 00:09:23.531 "data_offset": 0, 00:09:23.531 "data_size": 65536 00:09:23.531 }, 00:09:23.531 { 00:09:23.531 "name": "BaseBdev3", 00:09:23.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.531 "is_configured": false, 00:09:23.531 "data_offset": 0, 00:09:23.531 "data_size": 0 00:09:23.531 }, 00:09:23.531 { 00:09:23.531 "name": "BaseBdev4", 00:09:23.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.531 "is_configured": false, 00:09:23.531 "data_offset": 0, 00:09:23.531 "data_size": 0 00:09:23.531 } 00:09:23.531 ] 00:09:23.531 }' 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.531 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.101 [2024-10-15 01:10:36.616835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.101 BaseBdev3 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.101 [ 00:09:24.101 { 00:09:24.101 "name": "BaseBdev3", 00:09:24.101 "aliases": [ 00:09:24.101 "7a297847-e057-4623-8225-3ecc65903f8b" 00:09:24.101 ], 00:09:24.101 "product_name": "Malloc disk", 00:09:24.101 "block_size": 512, 00:09:24.101 "num_blocks": 65536, 00:09:24.101 "uuid": "7a297847-e057-4623-8225-3ecc65903f8b", 00:09:24.101 "assigned_rate_limits": { 00:09:24.101 "rw_ios_per_sec": 0, 00:09:24.101 "rw_mbytes_per_sec": 0, 00:09:24.101 "r_mbytes_per_sec": 0, 00:09:24.101 "w_mbytes_per_sec": 0 00:09:24.101 }, 00:09:24.101 "claimed": true, 00:09:24.101 "claim_type": "exclusive_write", 00:09:24.101 "zoned": false, 00:09:24.101 "supported_io_types": { 00:09:24.101 "read": true, 00:09:24.101 "write": true, 00:09:24.101 "unmap": true, 00:09:24.101 "flush": true, 00:09:24.101 "reset": true, 00:09:24.101 "nvme_admin": false, 00:09:24.101 "nvme_io": false, 00:09:24.101 "nvme_io_md": false, 00:09:24.101 "write_zeroes": true, 00:09:24.101 "zcopy": true, 00:09:24.101 "get_zone_info": false, 00:09:24.101 "zone_management": false, 00:09:24.101 "zone_append": false, 00:09:24.101 "compare": false, 00:09:24.101 "compare_and_write": false, 
00:09:24.101 "abort": true, 00:09:24.101 "seek_hole": false, 00:09:24.101 "seek_data": false, 00:09:24.101 "copy": true, 00:09:24.101 "nvme_iov_md": false 00:09:24.101 }, 00:09:24.101 "memory_domains": [ 00:09:24.101 { 00:09:24.101 "dma_device_id": "system", 00:09:24.101 "dma_device_type": 1 00:09:24.101 }, 00:09:24.101 { 00:09:24.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.101 "dma_device_type": 2 00:09:24.101 } 00:09:24.101 ], 00:09:24.101 "driver_specific": {} 00:09:24.101 } 00:09:24.101 ] 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.101 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.101 "name": "Existed_Raid", 00:09:24.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.101 "strip_size_kb": 64, 00:09:24.101 "state": "configuring", 00:09:24.101 "raid_level": "raid0", 00:09:24.101 "superblock": false, 00:09:24.101 "num_base_bdevs": 4, 00:09:24.102 "num_base_bdevs_discovered": 3, 00:09:24.102 "num_base_bdevs_operational": 4, 00:09:24.102 "base_bdevs_list": [ 00:09:24.102 { 00:09:24.102 "name": "BaseBdev1", 00:09:24.102 "uuid": "84a3265b-c3b5-46ac-9a28-92354dc8fcb0", 00:09:24.102 "is_configured": true, 00:09:24.102 "data_offset": 0, 00:09:24.102 "data_size": 65536 00:09:24.102 }, 00:09:24.102 { 00:09:24.102 "name": "BaseBdev2", 00:09:24.102 "uuid": "5984ebbd-7118-423b-b257-a1fbe08498d9", 00:09:24.102 "is_configured": true, 00:09:24.102 "data_offset": 0, 00:09:24.102 "data_size": 65536 00:09:24.102 }, 00:09:24.102 { 00:09:24.102 "name": "BaseBdev3", 00:09:24.102 "uuid": "7a297847-e057-4623-8225-3ecc65903f8b", 00:09:24.102 "is_configured": true, 00:09:24.102 "data_offset": 0, 00:09:24.102 "data_size": 65536 00:09:24.102 }, 00:09:24.102 { 00:09:24.102 "name": "BaseBdev4", 00:09:24.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.102 "is_configured": false, 
00:09:24.102 "data_offset": 0, 00:09:24.102 "data_size": 0 00:09:24.102 } 00:09:24.102 ] 00:09:24.102 }' 00:09:24.102 01:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.102 01:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.671 [2024-10-15 01:10:37.119189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:24.671 [2024-10-15 01:10:37.119309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:24.671 [2024-10-15 01:10:37.119349] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:24.671 [2024-10-15 01:10:37.119710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:24.671 [2024-10-15 01:10:37.119902] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:24.671 [2024-10-15 01:10:37.119957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:24.671 [2024-10-15 01:10:37.120198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.671 BaseBdev4 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.671 [ 00:09:24.671 { 00:09:24.671 "name": "BaseBdev4", 00:09:24.671 "aliases": [ 00:09:24.671 "dfcbe8f3-243a-4fec-9bb2-5c68d1c1c392" 00:09:24.671 ], 00:09:24.671 "product_name": "Malloc disk", 00:09:24.671 "block_size": 512, 00:09:24.671 "num_blocks": 65536, 00:09:24.671 "uuid": "dfcbe8f3-243a-4fec-9bb2-5c68d1c1c392", 00:09:24.671 "assigned_rate_limits": { 00:09:24.671 "rw_ios_per_sec": 0, 00:09:24.671 "rw_mbytes_per_sec": 0, 00:09:24.671 "r_mbytes_per_sec": 0, 00:09:24.671 "w_mbytes_per_sec": 0 00:09:24.671 }, 00:09:24.671 "claimed": true, 00:09:24.671 "claim_type": "exclusive_write", 00:09:24.671 "zoned": false, 00:09:24.671 "supported_io_types": { 00:09:24.671 "read": true, 00:09:24.671 "write": true, 00:09:24.671 "unmap": true, 00:09:24.671 "flush": true, 00:09:24.671 "reset": true, 00:09:24.671 
"nvme_admin": false, 00:09:24.671 "nvme_io": false, 00:09:24.671 "nvme_io_md": false, 00:09:24.671 "write_zeroes": true, 00:09:24.671 "zcopy": true, 00:09:24.671 "get_zone_info": false, 00:09:24.671 "zone_management": false, 00:09:24.671 "zone_append": false, 00:09:24.671 "compare": false, 00:09:24.671 "compare_and_write": false, 00:09:24.671 "abort": true, 00:09:24.671 "seek_hole": false, 00:09:24.671 "seek_data": false, 00:09:24.671 "copy": true, 00:09:24.671 "nvme_iov_md": false 00:09:24.671 }, 00:09:24.671 "memory_domains": [ 00:09:24.671 { 00:09:24.671 "dma_device_id": "system", 00:09:24.671 "dma_device_type": 1 00:09:24.671 }, 00:09:24.671 { 00:09:24.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.671 "dma_device_type": 2 00:09:24.671 } 00:09:24.671 ], 00:09:24.671 "driver_specific": {} 00:09:24.671 } 00:09:24.671 ] 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:24.671 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:24.672 01:10:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.672 "name": "Existed_Raid", 00:09:24.672 "uuid": "06e89f22-4d39-4f03-9b56-84c1b231afaf", 00:09:24.672 "strip_size_kb": 64, 00:09:24.672 "state": "online", 00:09:24.672 "raid_level": "raid0", 00:09:24.672 "superblock": false, 00:09:24.672 "num_base_bdevs": 4, 00:09:24.672 "num_base_bdevs_discovered": 4, 00:09:24.672 "num_base_bdevs_operational": 4, 00:09:24.672 "base_bdevs_list": [ 00:09:24.672 { 00:09:24.672 "name": "BaseBdev1", 00:09:24.672 "uuid": "84a3265b-c3b5-46ac-9a28-92354dc8fcb0", 00:09:24.672 "is_configured": true, 00:09:24.672 "data_offset": 0, 00:09:24.672 "data_size": 65536 00:09:24.672 }, 00:09:24.672 { 00:09:24.672 "name": "BaseBdev2", 00:09:24.672 "uuid": "5984ebbd-7118-423b-b257-a1fbe08498d9", 00:09:24.672 "is_configured": true, 00:09:24.672 "data_offset": 0, 00:09:24.672 "data_size": 65536 00:09:24.672 }, 00:09:24.672 { 00:09:24.672 "name": "BaseBdev3", 00:09:24.672 "uuid": 
"7a297847-e057-4623-8225-3ecc65903f8b", 00:09:24.672 "is_configured": true, 00:09:24.672 "data_offset": 0, 00:09:24.672 "data_size": 65536 00:09:24.672 }, 00:09:24.672 { 00:09:24.672 "name": "BaseBdev4", 00:09:24.672 "uuid": "dfcbe8f3-243a-4fec-9bb2-5c68d1c1c392", 00:09:24.672 "is_configured": true, 00:09:24.672 "data_offset": 0, 00:09:24.672 "data_size": 65536 00:09:24.672 } 00:09:24.672 ] 00:09:24.672 }' 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.672 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.932 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:24.932 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:24.932 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.932 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:24.932 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.932 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.932 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:24.932 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.932 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.932 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.192 [2024-10-15 01:10:37.658685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.192 01:10:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.192 "name": "Existed_Raid", 00:09:25.192 "aliases": [ 00:09:25.192 "06e89f22-4d39-4f03-9b56-84c1b231afaf" 00:09:25.192 ], 00:09:25.192 "product_name": "Raid Volume", 00:09:25.192 "block_size": 512, 00:09:25.192 "num_blocks": 262144, 00:09:25.192 "uuid": "06e89f22-4d39-4f03-9b56-84c1b231afaf", 00:09:25.192 "assigned_rate_limits": { 00:09:25.192 "rw_ios_per_sec": 0, 00:09:25.192 "rw_mbytes_per_sec": 0, 00:09:25.192 "r_mbytes_per_sec": 0, 00:09:25.192 "w_mbytes_per_sec": 0 00:09:25.192 }, 00:09:25.192 "claimed": false, 00:09:25.192 "zoned": false, 00:09:25.192 "supported_io_types": { 00:09:25.192 "read": true, 00:09:25.192 "write": true, 00:09:25.192 "unmap": true, 00:09:25.192 "flush": true, 00:09:25.192 "reset": true, 00:09:25.192 "nvme_admin": false, 00:09:25.192 "nvme_io": false, 00:09:25.192 "nvme_io_md": false, 00:09:25.192 "write_zeroes": true, 00:09:25.192 "zcopy": false, 00:09:25.192 "get_zone_info": false, 00:09:25.192 "zone_management": false, 00:09:25.192 "zone_append": false, 00:09:25.192 "compare": false, 00:09:25.192 "compare_and_write": false, 00:09:25.192 "abort": false, 00:09:25.192 "seek_hole": false, 00:09:25.192 "seek_data": false, 00:09:25.192 "copy": false, 00:09:25.192 "nvme_iov_md": false 00:09:25.192 }, 00:09:25.192 "memory_domains": [ 00:09:25.192 { 00:09:25.192 "dma_device_id": "system", 00:09:25.192 "dma_device_type": 1 00:09:25.192 }, 00:09:25.192 { 00:09:25.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.192 "dma_device_type": 2 00:09:25.192 }, 00:09:25.192 { 00:09:25.192 "dma_device_id": "system", 00:09:25.192 "dma_device_type": 1 00:09:25.192 }, 00:09:25.192 { 00:09:25.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.192 "dma_device_type": 2 00:09:25.192 }, 00:09:25.192 { 00:09:25.192 "dma_device_id": "system", 00:09:25.192 "dma_device_type": 1 00:09:25.192 }, 00:09:25.192 { 00:09:25.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:25.192 "dma_device_type": 2 00:09:25.192 }, 00:09:25.192 { 00:09:25.192 "dma_device_id": "system", 00:09:25.192 "dma_device_type": 1 00:09:25.192 }, 00:09:25.192 { 00:09:25.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.192 "dma_device_type": 2 00:09:25.192 } 00:09:25.192 ], 00:09:25.192 "driver_specific": { 00:09:25.192 "raid": { 00:09:25.192 "uuid": "06e89f22-4d39-4f03-9b56-84c1b231afaf", 00:09:25.192 "strip_size_kb": 64, 00:09:25.192 "state": "online", 00:09:25.192 "raid_level": "raid0", 00:09:25.192 "superblock": false, 00:09:25.192 "num_base_bdevs": 4, 00:09:25.192 "num_base_bdevs_discovered": 4, 00:09:25.192 "num_base_bdevs_operational": 4, 00:09:25.192 "base_bdevs_list": [ 00:09:25.192 { 00:09:25.192 "name": "BaseBdev1", 00:09:25.192 "uuid": "84a3265b-c3b5-46ac-9a28-92354dc8fcb0", 00:09:25.192 "is_configured": true, 00:09:25.192 "data_offset": 0, 00:09:25.192 "data_size": 65536 00:09:25.192 }, 00:09:25.192 { 00:09:25.192 "name": "BaseBdev2", 00:09:25.192 "uuid": "5984ebbd-7118-423b-b257-a1fbe08498d9", 00:09:25.192 "is_configured": true, 00:09:25.192 "data_offset": 0, 00:09:25.192 "data_size": 65536 00:09:25.192 }, 00:09:25.192 { 00:09:25.192 "name": "BaseBdev3", 00:09:25.192 "uuid": "7a297847-e057-4623-8225-3ecc65903f8b", 00:09:25.192 "is_configured": true, 00:09:25.192 "data_offset": 0, 00:09:25.192 "data_size": 65536 00:09:25.192 }, 00:09:25.192 { 00:09:25.192 "name": "BaseBdev4", 00:09:25.192 "uuid": "dfcbe8f3-243a-4fec-9bb2-5c68d1c1c392", 00:09:25.192 "is_configured": true, 00:09:25.192 "data_offset": 0, 00:09:25.192 "data_size": 65536 00:09:25.192 } 00:09:25.192 ] 00:09:25.192 } 00:09:25.192 } 00:09:25.192 }' 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:25.192 BaseBdev2 00:09:25.192 BaseBdev3 
00:09:25.192 BaseBdev4' 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.192 01:10:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.192 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.193 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.193 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.193 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.193 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:25.193 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.193 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.193 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.452 01:10:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.452 [2024-10-15 01:10:37.953856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:25.452 [2024-10-15 01:10:37.953925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.452 [2024-10-15 01:10:37.953992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.452 01:10:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.452 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.452 "name": "Existed_Raid", 00:09:25.452 "uuid": "06e89f22-4d39-4f03-9b56-84c1b231afaf", 00:09:25.452 "strip_size_kb": 64, 00:09:25.452 "state": "offline", 00:09:25.452 "raid_level": "raid0", 00:09:25.452 "superblock": false, 00:09:25.452 "num_base_bdevs": 4, 00:09:25.452 "num_base_bdevs_discovered": 3, 00:09:25.452 "num_base_bdevs_operational": 3, 00:09:25.452 "base_bdevs_list": [ 00:09:25.452 { 00:09:25.452 "name": null, 00:09:25.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.452 "is_configured": false, 00:09:25.452 "data_offset": 0, 00:09:25.452 "data_size": 65536 00:09:25.452 }, 00:09:25.452 { 00:09:25.452 "name": "BaseBdev2", 00:09:25.452 "uuid": "5984ebbd-7118-423b-b257-a1fbe08498d9", 00:09:25.452 "is_configured": 
true, 00:09:25.452 "data_offset": 0, 00:09:25.452 "data_size": 65536 00:09:25.452 }, 00:09:25.452 { 00:09:25.452 "name": "BaseBdev3", 00:09:25.452 "uuid": "7a297847-e057-4623-8225-3ecc65903f8b", 00:09:25.452 "is_configured": true, 00:09:25.452 "data_offset": 0, 00:09:25.452 "data_size": 65536 00:09:25.452 }, 00:09:25.452 { 00:09:25.452 "name": "BaseBdev4", 00:09:25.452 "uuid": "dfcbe8f3-243a-4fec-9bb2-5c68d1c1c392", 00:09:25.452 "is_configured": true, 00:09:25.452 "data_offset": 0, 00:09:25.452 "data_size": 65536 00:09:25.452 } 00:09:25.452 ] 00:09:25.452 }' 00:09:25.452 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.452 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.712 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:25.712 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.712 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:25.712 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.712 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.712 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.712 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.712 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:25.712 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:25.712 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:25.712 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:25.712 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.973 [2024-10-15 01:10:38.440431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.973 [2024-10-15 01:10:38.511548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:25.973 01:10:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.973 [2024-10-15 01:10:38.578544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:25.973 [2024-10-15 01:10:38.578586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.973 BaseBdev2 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.973 [ 00:09:25.973 { 00:09:25.973 "name": "BaseBdev2", 00:09:25.973 "aliases": [ 00:09:25.973 "20a15522-2836-475e-8da7-0ea8b21727cb" 00:09:25.973 ], 00:09:25.973 "product_name": "Malloc disk", 00:09:25.973 "block_size": 512, 00:09:25.973 "num_blocks": 65536, 00:09:25.973 "uuid": "20a15522-2836-475e-8da7-0ea8b21727cb", 00:09:25.973 "assigned_rate_limits": { 00:09:25.973 "rw_ios_per_sec": 0, 00:09:25.973 "rw_mbytes_per_sec": 0, 00:09:25.973 "r_mbytes_per_sec": 0, 00:09:25.973 "w_mbytes_per_sec": 0 00:09:25.973 }, 00:09:25.973 "claimed": false, 00:09:25.973 "zoned": false, 00:09:25.973 "supported_io_types": { 00:09:25.973 "read": true, 00:09:25.973 "write": true, 00:09:25.973 "unmap": true, 00:09:25.973 "flush": true, 00:09:25.973 "reset": true, 00:09:25.973 "nvme_admin": false, 00:09:25.973 "nvme_io": false, 00:09:25.973 "nvme_io_md": false, 00:09:25.973 "write_zeroes": true, 00:09:25.973 "zcopy": true, 00:09:25.973 "get_zone_info": false, 00:09:25.973 "zone_management": false, 00:09:25.973 "zone_append": false, 00:09:25.973 "compare": false, 00:09:25.973 "compare_and_write": false, 00:09:25.973 "abort": true, 00:09:25.973 "seek_hole": false, 00:09:25.973 
"seek_data": false, 00:09:25.973 "copy": true, 00:09:25.973 "nvme_iov_md": false 00:09:25.973 }, 00:09:25.973 "memory_domains": [ 00:09:25.973 { 00:09:25.973 "dma_device_id": "system", 00:09:25.973 "dma_device_type": 1 00:09:25.973 }, 00:09:25.973 { 00:09:25.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.973 "dma_device_type": 2 00:09:25.973 } 00:09:25.973 ], 00:09:25.973 "driver_specific": {} 00:09:25.973 } 00:09:25.973 ] 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.973 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.234 BaseBdev3 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.234 [ 00:09:26.234 { 00:09:26.234 "name": "BaseBdev3", 00:09:26.234 "aliases": [ 00:09:26.234 "dfeb064c-bb46-4cd2-bb6d-25c1b20cdf95" 00:09:26.234 ], 00:09:26.234 "product_name": "Malloc disk", 00:09:26.234 "block_size": 512, 00:09:26.234 "num_blocks": 65536, 00:09:26.234 "uuid": "dfeb064c-bb46-4cd2-bb6d-25c1b20cdf95", 00:09:26.234 "assigned_rate_limits": { 00:09:26.234 "rw_ios_per_sec": 0, 00:09:26.234 "rw_mbytes_per_sec": 0, 00:09:26.234 "r_mbytes_per_sec": 0, 00:09:26.234 "w_mbytes_per_sec": 0 00:09:26.234 }, 00:09:26.234 "claimed": false, 00:09:26.234 "zoned": false, 00:09:26.234 "supported_io_types": { 00:09:26.234 "read": true, 00:09:26.234 "write": true, 00:09:26.234 "unmap": true, 00:09:26.234 "flush": true, 00:09:26.234 "reset": true, 00:09:26.234 "nvme_admin": false, 00:09:26.234 "nvme_io": false, 00:09:26.234 "nvme_io_md": false, 00:09:26.234 "write_zeroes": true, 00:09:26.234 "zcopy": true, 00:09:26.234 "get_zone_info": false, 00:09:26.234 "zone_management": false, 00:09:26.234 "zone_append": false, 00:09:26.234 "compare": false, 00:09:26.234 "compare_and_write": false, 00:09:26.234 "abort": true, 00:09:26.234 "seek_hole": false, 00:09:26.234 "seek_data": false, 
00:09:26.234 "copy": true, 00:09:26.234 "nvme_iov_md": false 00:09:26.234 }, 00:09:26.234 "memory_domains": [ 00:09:26.234 { 00:09:26.234 "dma_device_id": "system", 00:09:26.234 "dma_device_type": 1 00:09:26.234 }, 00:09:26.234 { 00:09:26.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.234 "dma_device_type": 2 00:09:26.234 } 00:09:26.234 ], 00:09:26.234 "driver_specific": {} 00:09:26.234 } 00:09:26.234 ] 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.234 BaseBdev4 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:26.234 
01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.234 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.235 [ 00:09:26.235 { 00:09:26.235 "name": "BaseBdev4", 00:09:26.235 "aliases": [ 00:09:26.235 "f1fb3921-c47e-4fbe-a617-4ef24c5bfa61" 00:09:26.235 ], 00:09:26.235 "product_name": "Malloc disk", 00:09:26.235 "block_size": 512, 00:09:26.235 "num_blocks": 65536, 00:09:26.235 "uuid": "f1fb3921-c47e-4fbe-a617-4ef24c5bfa61", 00:09:26.235 "assigned_rate_limits": { 00:09:26.235 "rw_ios_per_sec": 0, 00:09:26.235 "rw_mbytes_per_sec": 0, 00:09:26.235 "r_mbytes_per_sec": 0, 00:09:26.235 "w_mbytes_per_sec": 0 00:09:26.235 }, 00:09:26.235 "claimed": false, 00:09:26.235 "zoned": false, 00:09:26.235 "supported_io_types": { 00:09:26.235 "read": true, 00:09:26.235 "write": true, 00:09:26.235 "unmap": true, 00:09:26.235 "flush": true, 00:09:26.235 "reset": true, 00:09:26.235 "nvme_admin": false, 00:09:26.235 "nvme_io": false, 00:09:26.235 "nvme_io_md": false, 00:09:26.235 "write_zeroes": true, 00:09:26.235 "zcopy": true, 00:09:26.235 "get_zone_info": false, 00:09:26.235 "zone_management": false, 00:09:26.235 "zone_append": false, 00:09:26.235 "compare": false, 00:09:26.235 "compare_and_write": false, 00:09:26.235 "abort": true, 00:09:26.235 "seek_hole": false, 00:09:26.235 "seek_data": false, 00:09:26.235 
"copy": true, 00:09:26.235 "nvme_iov_md": false 00:09:26.235 }, 00:09:26.235 "memory_domains": [ 00:09:26.235 { 00:09:26.235 "dma_device_id": "system", 00:09:26.235 "dma_device_type": 1 00:09:26.235 }, 00:09:26.235 { 00:09:26.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.235 "dma_device_type": 2 00:09:26.235 } 00:09:26.235 ], 00:09:26.235 "driver_specific": {} 00:09:26.235 } 00:09:26.235 ] 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.235 [2024-10-15 01:10:38.810588] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:26.235 [2024-10-15 01:10:38.810682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:26.235 [2024-10-15 01:10:38.810718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.235 [2024-10-15 01:10:38.812539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.235 [2024-10-15 01:10:38.812587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.235 01:10:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.235 "name": "Existed_Raid", 00:09:26.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.235 "strip_size_kb": 64, 00:09:26.235 "state": "configuring", 00:09:26.235 
"raid_level": "raid0", 00:09:26.235 "superblock": false, 00:09:26.235 "num_base_bdevs": 4, 00:09:26.235 "num_base_bdevs_discovered": 3, 00:09:26.235 "num_base_bdevs_operational": 4, 00:09:26.235 "base_bdevs_list": [ 00:09:26.235 { 00:09:26.235 "name": "BaseBdev1", 00:09:26.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.235 "is_configured": false, 00:09:26.235 "data_offset": 0, 00:09:26.235 "data_size": 0 00:09:26.235 }, 00:09:26.235 { 00:09:26.235 "name": "BaseBdev2", 00:09:26.235 "uuid": "20a15522-2836-475e-8da7-0ea8b21727cb", 00:09:26.235 "is_configured": true, 00:09:26.235 "data_offset": 0, 00:09:26.235 "data_size": 65536 00:09:26.235 }, 00:09:26.235 { 00:09:26.235 "name": "BaseBdev3", 00:09:26.235 "uuid": "dfeb064c-bb46-4cd2-bb6d-25c1b20cdf95", 00:09:26.235 "is_configured": true, 00:09:26.235 "data_offset": 0, 00:09:26.235 "data_size": 65536 00:09:26.235 }, 00:09:26.235 { 00:09:26.235 "name": "BaseBdev4", 00:09:26.235 "uuid": "f1fb3921-c47e-4fbe-a617-4ef24c5bfa61", 00:09:26.235 "is_configured": true, 00:09:26.235 "data_offset": 0, 00:09:26.235 "data_size": 65536 00:09:26.235 } 00:09:26.235 ] 00:09:26.235 }' 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.235 01:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.806 [2024-10-15 01:10:39.289821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.806 "name": "Existed_Raid", 00:09:26.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.806 "strip_size_kb": 64, 00:09:26.806 "state": "configuring", 00:09:26.806 "raid_level": "raid0", 00:09:26.806 "superblock": false, 00:09:26.806 
"num_base_bdevs": 4, 00:09:26.806 "num_base_bdevs_discovered": 2, 00:09:26.806 "num_base_bdevs_operational": 4, 00:09:26.806 "base_bdevs_list": [ 00:09:26.806 { 00:09:26.806 "name": "BaseBdev1", 00:09:26.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.806 "is_configured": false, 00:09:26.806 "data_offset": 0, 00:09:26.806 "data_size": 0 00:09:26.806 }, 00:09:26.806 { 00:09:26.806 "name": null, 00:09:26.806 "uuid": "20a15522-2836-475e-8da7-0ea8b21727cb", 00:09:26.806 "is_configured": false, 00:09:26.806 "data_offset": 0, 00:09:26.806 "data_size": 65536 00:09:26.806 }, 00:09:26.806 { 00:09:26.806 "name": "BaseBdev3", 00:09:26.806 "uuid": "dfeb064c-bb46-4cd2-bb6d-25c1b20cdf95", 00:09:26.806 "is_configured": true, 00:09:26.806 "data_offset": 0, 00:09:26.806 "data_size": 65536 00:09:26.806 }, 00:09:26.806 { 00:09:26.806 "name": "BaseBdev4", 00:09:26.806 "uuid": "f1fb3921-c47e-4fbe-a617-4ef24c5bfa61", 00:09:26.806 "is_configured": true, 00:09:26.806 "data_offset": 0, 00:09:26.806 "data_size": 65536 00:09:26.806 } 00:09:26.806 ] 00:09:26.806 }' 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.806 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:27.066 01:10:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.066 [2024-10-15 01:10:39.783879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.066 BaseBdev1 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.066 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:27.337 [ 00:09:27.337 { 00:09:27.337 "name": "BaseBdev1", 00:09:27.337 "aliases": [ 00:09:27.337 "4b67df00-a04f-4fe3-93e9-4d498c5d8ad2" 00:09:27.337 ], 00:09:27.337 "product_name": "Malloc disk", 00:09:27.337 "block_size": 512, 00:09:27.337 "num_blocks": 65536, 00:09:27.337 "uuid": "4b67df00-a04f-4fe3-93e9-4d498c5d8ad2", 00:09:27.337 "assigned_rate_limits": { 00:09:27.337 "rw_ios_per_sec": 0, 00:09:27.337 "rw_mbytes_per_sec": 0, 00:09:27.337 "r_mbytes_per_sec": 0, 00:09:27.337 "w_mbytes_per_sec": 0 00:09:27.337 }, 00:09:27.337 "claimed": true, 00:09:27.337 "claim_type": "exclusive_write", 00:09:27.337 "zoned": false, 00:09:27.337 "supported_io_types": { 00:09:27.337 "read": true, 00:09:27.337 "write": true, 00:09:27.337 "unmap": true, 00:09:27.337 "flush": true, 00:09:27.337 "reset": true, 00:09:27.337 "nvme_admin": false, 00:09:27.337 "nvme_io": false, 00:09:27.337 "nvme_io_md": false, 00:09:27.337 "write_zeroes": true, 00:09:27.337 "zcopy": true, 00:09:27.337 "get_zone_info": false, 00:09:27.337 "zone_management": false, 00:09:27.337 "zone_append": false, 00:09:27.337 "compare": false, 00:09:27.337 "compare_and_write": false, 00:09:27.337 "abort": true, 00:09:27.337 "seek_hole": false, 00:09:27.337 "seek_data": false, 00:09:27.337 "copy": true, 00:09:27.337 "nvme_iov_md": false 00:09:27.337 }, 00:09:27.337 "memory_domains": [ 00:09:27.337 { 00:09:27.337 "dma_device_id": "system", 00:09:27.337 "dma_device_type": 1 00:09:27.337 }, 00:09:27.337 { 00:09:27.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.337 "dma_device_type": 2 00:09:27.337 } 00:09:27.337 ], 00:09:27.337 "driver_specific": {} 00:09:27.337 } 00:09:27.337 ] 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.337 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.337 "name": "Existed_Raid", 00:09:27.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.337 "strip_size_kb": 64, 00:09:27.337 "state": "configuring", 00:09:27.337 "raid_level": "raid0", 00:09:27.337 "superblock": false, 
00:09:27.337 "num_base_bdevs": 4, 00:09:27.337 "num_base_bdevs_discovered": 3, 00:09:27.337 "num_base_bdevs_operational": 4, 00:09:27.337 "base_bdevs_list": [ 00:09:27.337 { 00:09:27.337 "name": "BaseBdev1", 00:09:27.337 "uuid": "4b67df00-a04f-4fe3-93e9-4d498c5d8ad2", 00:09:27.337 "is_configured": true, 00:09:27.337 "data_offset": 0, 00:09:27.337 "data_size": 65536 00:09:27.337 }, 00:09:27.337 { 00:09:27.337 "name": null, 00:09:27.337 "uuid": "20a15522-2836-475e-8da7-0ea8b21727cb", 00:09:27.337 "is_configured": false, 00:09:27.338 "data_offset": 0, 00:09:27.338 "data_size": 65536 00:09:27.338 }, 00:09:27.338 { 00:09:27.338 "name": "BaseBdev3", 00:09:27.338 "uuid": "dfeb064c-bb46-4cd2-bb6d-25c1b20cdf95", 00:09:27.338 "is_configured": true, 00:09:27.338 "data_offset": 0, 00:09:27.338 "data_size": 65536 00:09:27.338 }, 00:09:27.338 { 00:09:27.338 "name": "BaseBdev4", 00:09:27.338 "uuid": "f1fb3921-c47e-4fbe-a617-4ef24c5bfa61", 00:09:27.338 "is_configured": true, 00:09:27.338 "data_offset": 0, 00:09:27.338 "data_size": 65536 00:09:27.338 } 00:09:27.338 ] 00:09:27.338 }' 00:09:27.338 01:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.338 01:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.598 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.598 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:27.598 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.598 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.598 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:27.858 01:10:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.858 [2024-10-15 01:10:40.331030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.858 "name": "Existed_Raid", 00:09:27.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.858 "strip_size_kb": 64, 00:09:27.858 "state": "configuring", 00:09:27.858 "raid_level": "raid0", 00:09:27.858 "superblock": false, 00:09:27.858 "num_base_bdevs": 4, 00:09:27.858 "num_base_bdevs_discovered": 2, 00:09:27.858 "num_base_bdevs_operational": 4, 00:09:27.858 "base_bdevs_list": [ 00:09:27.858 { 00:09:27.858 "name": "BaseBdev1", 00:09:27.858 "uuid": "4b67df00-a04f-4fe3-93e9-4d498c5d8ad2", 00:09:27.858 "is_configured": true, 00:09:27.858 "data_offset": 0, 00:09:27.858 "data_size": 65536 00:09:27.858 }, 00:09:27.858 { 00:09:27.858 "name": null, 00:09:27.858 "uuid": "20a15522-2836-475e-8da7-0ea8b21727cb", 00:09:27.858 "is_configured": false, 00:09:27.858 "data_offset": 0, 00:09:27.858 "data_size": 65536 00:09:27.858 }, 00:09:27.858 { 00:09:27.858 "name": null, 00:09:27.858 "uuid": "dfeb064c-bb46-4cd2-bb6d-25c1b20cdf95", 00:09:27.858 "is_configured": false, 00:09:27.858 "data_offset": 0, 00:09:27.858 "data_size": 65536 00:09:27.858 }, 00:09:27.858 { 00:09:27.858 "name": "BaseBdev4", 00:09:27.858 "uuid": "f1fb3921-c47e-4fbe-a617-4ef24c5bfa61", 00:09:27.858 "is_configured": true, 00:09:27.858 "data_offset": 0, 00:09:27.858 "data_size": 65536 00:09:27.858 } 00:09:27.858 ] 00:09:27.858 }' 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.858 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.118 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:28.118 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.118 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.118 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.378 [2024-10-15 01:10:40.886095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.378 "name": "Existed_Raid", 00:09:28.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.378 "strip_size_kb": 64, 00:09:28.378 "state": "configuring", 00:09:28.378 "raid_level": "raid0", 00:09:28.378 "superblock": false, 00:09:28.378 "num_base_bdevs": 4, 00:09:28.378 "num_base_bdevs_discovered": 3, 00:09:28.378 "num_base_bdevs_operational": 4, 00:09:28.378 "base_bdevs_list": [ 00:09:28.378 { 00:09:28.378 "name": "BaseBdev1", 00:09:28.378 "uuid": "4b67df00-a04f-4fe3-93e9-4d498c5d8ad2", 00:09:28.378 "is_configured": true, 00:09:28.378 "data_offset": 0, 00:09:28.378 "data_size": 65536 00:09:28.378 }, 00:09:28.378 { 00:09:28.378 "name": null, 00:09:28.378 "uuid": "20a15522-2836-475e-8da7-0ea8b21727cb", 00:09:28.378 "is_configured": false, 00:09:28.378 "data_offset": 0, 00:09:28.378 "data_size": 65536 00:09:28.378 }, 00:09:28.378 { 00:09:28.378 "name": "BaseBdev3", 00:09:28.378 "uuid": "dfeb064c-bb46-4cd2-bb6d-25c1b20cdf95", 00:09:28.378 "is_configured": 
true, 00:09:28.378 "data_offset": 0, 00:09:28.378 "data_size": 65536 00:09:28.378 }, 00:09:28.378 { 00:09:28.378 "name": "BaseBdev4", 00:09:28.378 "uuid": "f1fb3921-c47e-4fbe-a617-4ef24c5bfa61", 00:09:28.378 "is_configured": true, 00:09:28.378 "data_offset": 0, 00:09:28.378 "data_size": 65536 00:09:28.378 } 00:09:28.378 ] 00:09:28.378 }' 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.378 01:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.638 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.638 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:28.638 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.638 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.638 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.898 [2024-10-15 01:10:41.373313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.898 "name": "Existed_Raid", 00:09:28.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.898 "strip_size_kb": 64, 00:09:28.898 "state": "configuring", 00:09:28.898 "raid_level": "raid0", 00:09:28.898 "superblock": false, 00:09:28.898 "num_base_bdevs": 4, 00:09:28.898 "num_base_bdevs_discovered": 2, 00:09:28.898 "num_base_bdevs_operational": 4, 00:09:28.898 
"base_bdevs_list": [ 00:09:28.898 { 00:09:28.898 "name": null, 00:09:28.898 "uuid": "4b67df00-a04f-4fe3-93e9-4d498c5d8ad2", 00:09:28.898 "is_configured": false, 00:09:28.898 "data_offset": 0, 00:09:28.898 "data_size": 65536 00:09:28.898 }, 00:09:28.898 { 00:09:28.898 "name": null, 00:09:28.898 "uuid": "20a15522-2836-475e-8da7-0ea8b21727cb", 00:09:28.898 "is_configured": false, 00:09:28.898 "data_offset": 0, 00:09:28.898 "data_size": 65536 00:09:28.898 }, 00:09:28.898 { 00:09:28.898 "name": "BaseBdev3", 00:09:28.898 "uuid": "dfeb064c-bb46-4cd2-bb6d-25c1b20cdf95", 00:09:28.898 "is_configured": true, 00:09:28.898 "data_offset": 0, 00:09:28.898 "data_size": 65536 00:09:28.898 }, 00:09:28.898 { 00:09:28.898 "name": "BaseBdev4", 00:09:28.898 "uuid": "f1fb3921-c47e-4fbe-a617-4ef24c5bfa61", 00:09:28.898 "is_configured": true, 00:09:28.898 "data_offset": 0, 00:09:28.898 "data_size": 65536 00:09:28.898 } 00:09:28.898 ] 00:09:28.898 }' 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.898 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:29.158 01:10:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.158 [2024-10-15 01:10:41.803148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.158 "name": "Existed_Raid", 00:09:29.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.158 "strip_size_kb": 64, 00:09:29.158 "state": "configuring", 00:09:29.158 "raid_level": "raid0", 00:09:29.158 "superblock": false, 00:09:29.158 "num_base_bdevs": 4, 00:09:29.158 "num_base_bdevs_discovered": 3, 00:09:29.158 "num_base_bdevs_operational": 4, 00:09:29.158 "base_bdevs_list": [ 00:09:29.158 { 00:09:29.158 "name": null, 00:09:29.158 "uuid": "4b67df00-a04f-4fe3-93e9-4d498c5d8ad2", 00:09:29.158 "is_configured": false, 00:09:29.158 "data_offset": 0, 00:09:29.158 "data_size": 65536 00:09:29.158 }, 00:09:29.158 { 00:09:29.158 "name": "BaseBdev2", 00:09:29.158 "uuid": "20a15522-2836-475e-8da7-0ea8b21727cb", 00:09:29.158 "is_configured": true, 00:09:29.158 "data_offset": 0, 00:09:29.158 "data_size": 65536 00:09:29.158 }, 00:09:29.158 { 00:09:29.158 "name": "BaseBdev3", 00:09:29.158 "uuid": "dfeb064c-bb46-4cd2-bb6d-25c1b20cdf95", 00:09:29.158 "is_configured": true, 00:09:29.158 "data_offset": 0, 00:09:29.158 "data_size": 65536 00:09:29.158 }, 00:09:29.158 { 00:09:29.158 "name": "BaseBdev4", 00:09:29.158 "uuid": "f1fb3921-c47e-4fbe-a617-4ef24c5bfa61", 00:09:29.158 "is_configured": true, 00:09:29.158 "data_offset": 0, 00:09:29.158 "data_size": 65536 00:09:29.158 } 00:09:29.158 ] 00:09:29.158 }' 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.158 01:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4b67df00-a04f-4fe3-93e9-4d498c5d8ad2 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.728 NewBaseBdev 00:09:29.728 [2024-10-15 01:10:42.357290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:29.728 [2024-10-15 01:10:42.357335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:29.728 [2024-10-15 01:10:42.357342] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:29.728 [2024-10-15 01:10:42.357632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:09:29.728 [2024-10-15 01:10:42.357742] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:29.728 [2024-10-15 01:10:42.357753] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:29.728 [2024-10-15 01:10:42.357922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.728 [ 00:09:29.728 { 00:09:29.728 "name": 
"NewBaseBdev", 00:09:29.728 "aliases": [ 00:09:29.728 "4b67df00-a04f-4fe3-93e9-4d498c5d8ad2" 00:09:29.728 ], 00:09:29.728 "product_name": "Malloc disk", 00:09:29.728 "block_size": 512, 00:09:29.728 "num_blocks": 65536, 00:09:29.728 "uuid": "4b67df00-a04f-4fe3-93e9-4d498c5d8ad2", 00:09:29.728 "assigned_rate_limits": { 00:09:29.728 "rw_ios_per_sec": 0, 00:09:29.728 "rw_mbytes_per_sec": 0, 00:09:29.728 "r_mbytes_per_sec": 0, 00:09:29.728 "w_mbytes_per_sec": 0 00:09:29.728 }, 00:09:29.728 "claimed": true, 00:09:29.728 "claim_type": "exclusive_write", 00:09:29.728 "zoned": false, 00:09:29.728 "supported_io_types": { 00:09:29.728 "read": true, 00:09:29.728 "write": true, 00:09:29.728 "unmap": true, 00:09:29.728 "flush": true, 00:09:29.728 "reset": true, 00:09:29.728 "nvme_admin": false, 00:09:29.728 "nvme_io": false, 00:09:29.728 "nvme_io_md": false, 00:09:29.728 "write_zeroes": true, 00:09:29.728 "zcopy": true, 00:09:29.728 "get_zone_info": false, 00:09:29.728 "zone_management": false, 00:09:29.728 "zone_append": false, 00:09:29.728 "compare": false, 00:09:29.728 "compare_and_write": false, 00:09:29.728 "abort": true, 00:09:29.728 "seek_hole": false, 00:09:29.728 "seek_data": false, 00:09:29.728 "copy": true, 00:09:29.728 "nvme_iov_md": false 00:09:29.728 }, 00:09:29.728 "memory_domains": [ 00:09:29.728 { 00:09:29.728 "dma_device_id": "system", 00:09:29.728 "dma_device_type": 1 00:09:29.728 }, 00:09:29.728 { 00:09:29.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.728 "dma_device_type": 2 00:09:29.728 } 00:09:29.728 ], 00:09:29.728 "driver_specific": {} 00:09:29.728 } 00:09:29.728 ] 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:29.728 01:10:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.728 "name": "Existed_Raid", 00:09:29.728 "uuid": "67e4d30e-1f39-433a-b96f-67f69c81e67b", 00:09:29.728 "strip_size_kb": 64, 00:09:29.728 "state": "online", 00:09:29.728 "raid_level": "raid0", 00:09:29.728 "superblock": false, 00:09:29.728 "num_base_bdevs": 4, 00:09:29.728 "num_base_bdevs_discovered": 4, 00:09:29.728 
"num_base_bdevs_operational": 4, 00:09:29.728 "base_bdevs_list": [ 00:09:29.728 { 00:09:29.728 "name": "NewBaseBdev", 00:09:29.728 "uuid": "4b67df00-a04f-4fe3-93e9-4d498c5d8ad2", 00:09:29.728 "is_configured": true, 00:09:29.728 "data_offset": 0, 00:09:29.728 "data_size": 65536 00:09:29.728 }, 00:09:29.728 { 00:09:29.728 "name": "BaseBdev2", 00:09:29.728 "uuid": "20a15522-2836-475e-8da7-0ea8b21727cb", 00:09:29.728 "is_configured": true, 00:09:29.728 "data_offset": 0, 00:09:29.728 "data_size": 65536 00:09:29.728 }, 00:09:29.728 { 00:09:29.728 "name": "BaseBdev3", 00:09:29.728 "uuid": "dfeb064c-bb46-4cd2-bb6d-25c1b20cdf95", 00:09:29.728 "is_configured": true, 00:09:29.728 "data_offset": 0, 00:09:29.728 "data_size": 65536 00:09:29.728 }, 00:09:29.728 { 00:09:29.728 "name": "BaseBdev4", 00:09:29.728 "uuid": "f1fb3921-c47e-4fbe-a617-4ef24c5bfa61", 00:09:29.728 "is_configured": true, 00:09:29.728 "data_offset": 0, 00:09:29.728 "data_size": 65536 00:09:29.728 } 00:09:29.728 ] 00:09:29.728 }' 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.728 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.297 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:30.297 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:30.297 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:30.297 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.297 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.297 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.297 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:30.297 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.297 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.297 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.297 [2024-10-15 01:10:42.808891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.297 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.297 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:30.297 "name": "Existed_Raid", 00:09:30.297 "aliases": [ 00:09:30.297 "67e4d30e-1f39-433a-b96f-67f69c81e67b" 00:09:30.297 ], 00:09:30.297 "product_name": "Raid Volume", 00:09:30.297 "block_size": 512, 00:09:30.297 "num_blocks": 262144, 00:09:30.297 "uuid": "67e4d30e-1f39-433a-b96f-67f69c81e67b", 00:09:30.297 "assigned_rate_limits": { 00:09:30.297 "rw_ios_per_sec": 0, 00:09:30.297 "rw_mbytes_per_sec": 0, 00:09:30.297 "r_mbytes_per_sec": 0, 00:09:30.297 "w_mbytes_per_sec": 0 00:09:30.297 }, 00:09:30.297 "claimed": false, 00:09:30.297 "zoned": false, 00:09:30.297 "supported_io_types": { 00:09:30.297 "read": true, 00:09:30.297 "write": true, 00:09:30.297 "unmap": true, 00:09:30.297 "flush": true, 00:09:30.297 "reset": true, 00:09:30.297 "nvme_admin": false, 00:09:30.297 "nvme_io": false, 00:09:30.297 "nvme_io_md": false, 00:09:30.297 "write_zeroes": true, 00:09:30.297 "zcopy": false, 00:09:30.297 "get_zone_info": false, 00:09:30.297 "zone_management": false, 00:09:30.297 "zone_append": false, 00:09:30.297 "compare": false, 00:09:30.297 "compare_and_write": false, 00:09:30.297 "abort": false, 00:09:30.298 "seek_hole": false, 00:09:30.298 "seek_data": false, 00:09:30.298 "copy": false, 00:09:30.298 "nvme_iov_md": false 00:09:30.298 }, 00:09:30.298 "memory_domains": [ 00:09:30.298 { 00:09:30.298 "dma_device_id": "system", 
00:09:30.298 "dma_device_type": 1 00:09:30.298 }, 00:09:30.298 { 00:09:30.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.298 "dma_device_type": 2 00:09:30.298 }, 00:09:30.298 { 00:09:30.298 "dma_device_id": "system", 00:09:30.298 "dma_device_type": 1 00:09:30.298 }, 00:09:30.298 { 00:09:30.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.298 "dma_device_type": 2 00:09:30.298 }, 00:09:30.298 { 00:09:30.298 "dma_device_id": "system", 00:09:30.298 "dma_device_type": 1 00:09:30.298 }, 00:09:30.298 { 00:09:30.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.298 "dma_device_type": 2 00:09:30.298 }, 00:09:30.298 { 00:09:30.298 "dma_device_id": "system", 00:09:30.298 "dma_device_type": 1 00:09:30.298 }, 00:09:30.298 { 00:09:30.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.298 "dma_device_type": 2 00:09:30.298 } 00:09:30.298 ], 00:09:30.298 "driver_specific": { 00:09:30.298 "raid": { 00:09:30.298 "uuid": "67e4d30e-1f39-433a-b96f-67f69c81e67b", 00:09:30.298 "strip_size_kb": 64, 00:09:30.298 "state": "online", 00:09:30.298 "raid_level": "raid0", 00:09:30.298 "superblock": false, 00:09:30.298 "num_base_bdevs": 4, 00:09:30.298 "num_base_bdevs_discovered": 4, 00:09:30.298 "num_base_bdevs_operational": 4, 00:09:30.298 "base_bdevs_list": [ 00:09:30.298 { 00:09:30.298 "name": "NewBaseBdev", 00:09:30.298 "uuid": "4b67df00-a04f-4fe3-93e9-4d498c5d8ad2", 00:09:30.298 "is_configured": true, 00:09:30.298 "data_offset": 0, 00:09:30.298 "data_size": 65536 00:09:30.298 }, 00:09:30.298 { 00:09:30.298 "name": "BaseBdev2", 00:09:30.298 "uuid": "20a15522-2836-475e-8da7-0ea8b21727cb", 00:09:30.298 "is_configured": true, 00:09:30.298 "data_offset": 0, 00:09:30.298 "data_size": 65536 00:09:30.298 }, 00:09:30.298 { 00:09:30.298 "name": "BaseBdev3", 00:09:30.298 "uuid": "dfeb064c-bb46-4cd2-bb6d-25c1b20cdf95", 00:09:30.298 "is_configured": true, 00:09:30.298 "data_offset": 0, 00:09:30.298 "data_size": 65536 00:09:30.298 }, 00:09:30.298 { 00:09:30.298 "name": "BaseBdev4", 
00:09:30.298 "uuid": "f1fb3921-c47e-4fbe-a617-4ef24c5bfa61", 00:09:30.298 "is_configured": true, 00:09:30.298 "data_offset": 0, 00:09:30.298 "data_size": 65536 00:09:30.298 } 00:09:30.298 ] 00:09:30.298 } 00:09:30.298 } 00:09:30.298 }' 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:30.298 BaseBdev2 00:09:30.298 BaseBdev3 00:09:30.298 BaseBdev4' 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.298 01:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.298 01:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.298 01:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.298 01:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.298 01:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.298 01:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:30.298 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.298 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:30.558 01:10:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.558 [2024-10-15 01:10:43.096050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.558 [2024-10-15 01:10:43.096079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.558 [2024-10-15 01:10:43.096156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.558 [2024-10-15 01:10:43.096243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.558 [2024-10-15 01:10:43.096260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80093 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 80093 ']' 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80093 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80093 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80093' 00:09:30.558 killing process with pid 80093 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80093 00:09:30.558 [2024-10-15 01:10:43.143870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.558 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80093 00:09:30.558 [2024-10-15 01:10:43.183608] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:30.818 00:09:30.818 real 0m9.560s 00:09:30.818 user 0m16.398s 00:09:30.818 sys 0m1.952s 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.818 ************************************ 00:09:30.818 END TEST raid_state_function_test 00:09:30.818 ************************************ 00:09:30.818 01:10:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
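The test that just finished compares each base bdev against the raid bdev using two jq filters visible in the log: `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name` to collect configured base bdev names, and `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` to build the `'512   '` comparison string. A minimal Python sketch of the same filtering, on sample data rather than a live SPDK target:

```python
# Sketch of the jq filters at bdev_raid.sh@188-189, applied to a sample
# payload shaped like the bdev_get_bdevs output dumped in this log.
# The data below is illustrative, not taken from a running target.
raid_bdev = {
    "block_size": 512,
    "md_size": None,
    "md_interleave": None,
    "dif_type": None,
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "NewBaseBdev", "is_configured": True},
                {"name": "BaseBdev2", "is_configured": True},
                {"name": "spare", "is_configured": False},
            ]
        }
    },
}

def configured_names(info):
    # jq: .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
    return [b["name"]
            for b in info["driver_specific"]["raid"]["base_bdevs_list"]
            if b["is_configured"]]

def cmp_string(info):
    # jq: [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
    # jq's join renders null as an empty string, which is why the log's
    # comparison value is '512' followed by three spaces.
    fields = [info.get(k) for k in ("block_size", "md_size", "md_interleave", "dif_type")]
    return " ".join("" if f is None else str(f) for f in fields)

print(configured_names(raid_bdev))   # ['NewBaseBdev', 'BaseBdev2']
print(repr(cmp_string(raid_bdev)))   # '512   '
```

This also explains the `[[ 512 == \5\1\2\ \ \ ]]` checks above: bash is pattern-matching the base bdev's joined string against the raid bdev's, with the trailing spaces coming from the null `md_size`/`md_interleave`/`dif_type` fields.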
00:09:30.818 01:10:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:30.818 01:10:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.818 01:10:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.818 ************************************ 00:09:30.818 START TEST raid_state_function_test_sb 00:09:30.818 ************************************ 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:30.818 01:10:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:30.818 Process raid pid: 80742 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80742 00:09:30.818 01:10:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80742' 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80742 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80742 ']' 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.818 01:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.077 [2024-10-15 01:10:43.544707] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
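Once the bdev_svc app is up, the test creates the raid bdev before any of its base bdevs exist and then repeatedly asserts its reported state via `bdev_raid_get_bdevs`. A hedged Python sketch of that state check (the field names are taken from the `raid_bdev_info` JSON dumps in this log; the helper itself is a hypothetical stand-in for the `verify_raid_bdev_state` shell function, not SPDK code):

```python
# Stand-in for verify_raid_bdev_state (bdev_raid.sh@103 onward): given one
# entry from bdev_raid_get_bdevs, check state, level, strip size, and that
# num_base_bdevs_discovered agrees with the configured entries in the list.
def verify_raid_bdev_state(info, expected_state, raid_level, strip_size_kb, operational):
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == operational
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]

# Sample shaped like the first Existed_Raid dump below: created with -s
# (superblock) before any base bdev exists, so everything is unconfigured.
info = {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid0",
    "strip_size_kb": 64,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
        {"name": n, "is_configured": False}
        for n in ("BaseBdev1", "BaseBdev2", "BaseBdev3", "BaseBdev4")
    ],
}
verify_raid_bdev_state(info, "configuring", "raid0", 64, 4)
```

The raid bdev stays in `configuring` until all four base bdevs are discovered, which is why each `bdev_malloc_create` below is followed by another round of this check.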
00:09:31.077 [2024-10-15 01:10:43.544845] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.077 [2024-10-15 01:10:43.674622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.077 [2024-10-15 01:10:43.700109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.077 [2024-10-15 01:10:43.743306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.077 [2024-10-15 01:10:43.743336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.013 [2024-10-15 01:10:44.389214] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.013 [2024-10-15 01:10:44.389275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.013 [2024-10-15 01:10:44.389285] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.013 [2024-10-15 01:10:44.389295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.013 [2024-10-15 01:10:44.389300] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:32.013 [2024-10-15 01:10:44.389311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.013 [2024-10-15 01:10:44.389317] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:32.013 [2024-10-15 01:10:44.389325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.013 01:10:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.013 "name": "Existed_Raid", 00:09:32.013 "uuid": "a65b6464-9855-4fca-85be-19b275cbbae5", 00:09:32.013 "strip_size_kb": 64, 00:09:32.013 "state": "configuring", 00:09:32.013 "raid_level": "raid0", 00:09:32.013 "superblock": true, 00:09:32.013 "num_base_bdevs": 4, 00:09:32.013 "num_base_bdevs_discovered": 0, 00:09:32.013 "num_base_bdevs_operational": 4, 00:09:32.013 "base_bdevs_list": [ 00:09:32.013 { 00:09:32.013 "name": "BaseBdev1", 00:09:32.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.013 "is_configured": false, 00:09:32.013 "data_offset": 0, 00:09:32.013 "data_size": 0 00:09:32.013 }, 00:09:32.013 { 00:09:32.013 "name": "BaseBdev2", 00:09:32.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.013 "is_configured": false, 00:09:32.013 "data_offset": 0, 00:09:32.013 "data_size": 0 00:09:32.013 }, 00:09:32.013 { 00:09:32.013 "name": "BaseBdev3", 00:09:32.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.013 "is_configured": false, 00:09:32.013 "data_offset": 0, 00:09:32.013 "data_size": 0 00:09:32.013 }, 00:09:32.013 { 00:09:32.013 "name": "BaseBdev4", 00:09:32.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.013 "is_configured": false, 00:09:32.013 "data_offset": 0, 00:09:32.013 "data_size": 0 00:09:32.013 } 00:09:32.013 ] 00:09:32.013 }' 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.013 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.272 [2024-10-15 01:10:44.840333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.272 [2024-10-15 01:10:44.840427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.272 [2024-10-15 01:10:44.852368] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.272 [2024-10-15 01:10:44.852447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.272 [2024-10-15 01:10:44.852491] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.272 [2024-10-15 01:10:44.852517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.272 [2024-10-15 01:10:44.852539] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:32.272 [2024-10-15 01:10:44.852563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.272 [2024-10-15 01:10:44.852584] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:32.272 [2024-10-15 01:10:44.852621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.272 [2024-10-15 01:10:44.873357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.272 BaseBdev1 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.272 [ 00:09:32.272 { 00:09:32.272 "name": "BaseBdev1", 00:09:32.272 "aliases": [ 00:09:32.272 "0979f029-6881-49b7-b661-cdc9ce7d18ba" 00:09:32.272 ], 00:09:32.272 "product_name": "Malloc disk", 00:09:32.272 "block_size": 512, 00:09:32.272 "num_blocks": 65536, 00:09:32.272 "uuid": "0979f029-6881-49b7-b661-cdc9ce7d18ba", 00:09:32.272 "assigned_rate_limits": { 00:09:32.272 "rw_ios_per_sec": 0, 00:09:32.272 "rw_mbytes_per_sec": 0, 00:09:32.272 "r_mbytes_per_sec": 0, 00:09:32.272 "w_mbytes_per_sec": 0 00:09:32.272 }, 00:09:32.272 "claimed": true, 00:09:32.272 "claim_type": "exclusive_write", 00:09:32.272 "zoned": false, 00:09:32.272 "supported_io_types": { 00:09:32.272 "read": true, 00:09:32.272 "write": true, 00:09:32.272 "unmap": true, 00:09:32.272 "flush": true, 00:09:32.272 "reset": true, 00:09:32.272 "nvme_admin": false, 00:09:32.272 "nvme_io": false, 00:09:32.272 "nvme_io_md": false, 00:09:32.272 "write_zeroes": true, 00:09:32.272 "zcopy": true, 00:09:32.272 "get_zone_info": false, 00:09:32.272 "zone_management": false, 00:09:32.272 "zone_append": false, 00:09:32.272 "compare": false, 00:09:32.272 "compare_and_write": false, 00:09:32.272 "abort": true, 00:09:32.272 "seek_hole": false, 00:09:32.272 "seek_data": false, 00:09:32.272 "copy": true, 00:09:32.272 "nvme_iov_md": false 00:09:32.272 }, 00:09:32.272 "memory_domains": [ 00:09:32.272 { 00:09:32.272 "dma_device_id": "system", 00:09:32.272 "dma_device_type": 1 00:09:32.272 }, 00:09:32.272 { 00:09:32.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.272 "dma_device_type": 2 00:09:32.272 } 00:09:32.272 ], 00:09:32.272 "driver_specific": {} 
00:09:32.272 } 00:09:32.272 ] 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.272 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.273 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.273 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.273 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.273 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.273 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.273 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.273 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.273 01:10:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.273 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.273 "name": "Existed_Raid", 00:09:32.273 "uuid": "1088332a-8148-472e-86a2-7ba770a83470", 00:09:32.273 "strip_size_kb": 64, 00:09:32.273 "state": "configuring", 00:09:32.273 "raid_level": "raid0", 00:09:32.273 "superblock": true, 00:09:32.273 "num_base_bdevs": 4, 00:09:32.273 "num_base_bdevs_discovered": 1, 00:09:32.273 "num_base_bdevs_operational": 4, 00:09:32.273 "base_bdevs_list": [ 00:09:32.273 { 00:09:32.273 "name": "BaseBdev1", 00:09:32.273 "uuid": "0979f029-6881-49b7-b661-cdc9ce7d18ba", 00:09:32.273 "is_configured": true, 00:09:32.273 "data_offset": 2048, 00:09:32.273 "data_size": 63488 00:09:32.273 }, 00:09:32.273 { 00:09:32.273 "name": "BaseBdev2", 00:09:32.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.273 "is_configured": false, 00:09:32.273 "data_offset": 0, 00:09:32.273 "data_size": 0 00:09:32.273 }, 00:09:32.273 { 00:09:32.273 "name": "BaseBdev3", 00:09:32.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.273 "is_configured": false, 00:09:32.273 "data_offset": 0, 00:09:32.273 "data_size": 0 00:09:32.273 }, 00:09:32.273 { 00:09:32.273 "name": "BaseBdev4", 00:09:32.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.273 "is_configured": false, 00:09:32.273 "data_offset": 0, 00:09:32.273 "data_size": 0 00:09:32.273 } 00:09:32.273 ] 00:09:32.273 }' 00:09:32.273 01:10:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.273 01:10:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.840 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:32.840 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.840 01:10:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:32.840 [2024-10-15 01:10:45.296695] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.840 [2024-10-15 01:10:45.296770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:32.840 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.840 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:32.840 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.840 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.840 [2024-10-15 01:10:45.308766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.840 [2024-10-15 01:10:45.310701] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.840 [2024-10-15 01:10:45.310776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.840 [2024-10-15 01:10:45.310804] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:32.840 [2024-10-15 01:10:45.310826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.840 [2024-10-15 01:10:45.310844] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:32.840 [2024-10-15 01:10:45.310864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:32.840 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:32.841 01:10:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.841 "name": 
"Existed_Raid", 00:09:32.841 "uuid": "743c8f1b-6f57-45dc-be99-6a5f2d984d5a", 00:09:32.841 "strip_size_kb": 64, 00:09:32.841 "state": "configuring", 00:09:32.841 "raid_level": "raid0", 00:09:32.841 "superblock": true, 00:09:32.841 "num_base_bdevs": 4, 00:09:32.841 "num_base_bdevs_discovered": 1, 00:09:32.841 "num_base_bdevs_operational": 4, 00:09:32.841 "base_bdevs_list": [ 00:09:32.841 { 00:09:32.841 "name": "BaseBdev1", 00:09:32.841 "uuid": "0979f029-6881-49b7-b661-cdc9ce7d18ba", 00:09:32.841 "is_configured": true, 00:09:32.841 "data_offset": 2048, 00:09:32.841 "data_size": 63488 00:09:32.841 }, 00:09:32.841 { 00:09:32.841 "name": "BaseBdev2", 00:09:32.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.841 "is_configured": false, 00:09:32.841 "data_offset": 0, 00:09:32.841 "data_size": 0 00:09:32.841 }, 00:09:32.841 { 00:09:32.841 "name": "BaseBdev3", 00:09:32.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.841 "is_configured": false, 00:09:32.841 "data_offset": 0, 00:09:32.841 "data_size": 0 00:09:32.841 }, 00:09:32.841 { 00:09:32.841 "name": "BaseBdev4", 00:09:32.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.841 "is_configured": false, 00:09:32.841 "data_offset": 0, 00:09:32.841 "data_size": 0 00:09:32.841 } 00:09:32.841 ] 00:09:32.841 }' 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.841 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.100 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:33.100 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.100 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.100 [2024-10-15 01:10:45.778977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:09:33.100 BaseBdev2 00:09:33.100 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.100 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:33.100 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:33.100 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.100 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:33.100 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.100 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.100 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.100 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.100 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.100 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.101 [ 00:09:33.101 { 00:09:33.101 "name": "BaseBdev2", 00:09:33.101 "aliases": [ 00:09:33.101 "f70e513f-d019-4f6d-9780-2df811acb19b" 00:09:33.101 ], 00:09:33.101 "product_name": "Malloc disk", 00:09:33.101 "block_size": 512, 00:09:33.101 "num_blocks": 65536, 00:09:33.101 "uuid": "f70e513f-d019-4f6d-9780-2df811acb19b", 00:09:33.101 
"assigned_rate_limits": { 00:09:33.101 "rw_ios_per_sec": 0, 00:09:33.101 "rw_mbytes_per_sec": 0, 00:09:33.101 "r_mbytes_per_sec": 0, 00:09:33.101 "w_mbytes_per_sec": 0 00:09:33.101 }, 00:09:33.101 "claimed": true, 00:09:33.101 "claim_type": "exclusive_write", 00:09:33.101 "zoned": false, 00:09:33.101 "supported_io_types": { 00:09:33.101 "read": true, 00:09:33.101 "write": true, 00:09:33.101 "unmap": true, 00:09:33.101 "flush": true, 00:09:33.101 "reset": true, 00:09:33.101 "nvme_admin": false, 00:09:33.101 "nvme_io": false, 00:09:33.101 "nvme_io_md": false, 00:09:33.101 "write_zeroes": true, 00:09:33.101 "zcopy": true, 00:09:33.101 "get_zone_info": false, 00:09:33.101 "zone_management": false, 00:09:33.101 "zone_append": false, 00:09:33.101 "compare": false, 00:09:33.101 "compare_and_write": false, 00:09:33.101 "abort": true, 00:09:33.101 "seek_hole": false, 00:09:33.101 "seek_data": false, 00:09:33.101 "copy": true, 00:09:33.101 "nvme_iov_md": false 00:09:33.101 }, 00:09:33.101 "memory_domains": [ 00:09:33.101 { 00:09:33.101 "dma_device_id": "system", 00:09:33.101 "dma_device_type": 1 00:09:33.101 }, 00:09:33.101 { 00:09:33.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.101 "dma_device_type": 2 00:09:33.101 } 00:09:33.101 ], 00:09:33.101 "driver_specific": {} 00:09:33.101 } 00:09:33.101 ] 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.101 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.359 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.359 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.359 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.359 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.359 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.359 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.359 "name": "Existed_Raid", 00:09:33.359 "uuid": "743c8f1b-6f57-45dc-be99-6a5f2d984d5a", 00:09:33.359 "strip_size_kb": 64, 00:09:33.359 "state": "configuring", 00:09:33.359 "raid_level": "raid0", 00:09:33.359 "superblock": true, 00:09:33.359 "num_base_bdevs": 4, 00:09:33.359 "num_base_bdevs_discovered": 2, 00:09:33.359 "num_base_bdevs_operational": 4, 
00:09:33.359 "base_bdevs_list": [ 00:09:33.359 { 00:09:33.359 "name": "BaseBdev1", 00:09:33.359 "uuid": "0979f029-6881-49b7-b661-cdc9ce7d18ba", 00:09:33.359 "is_configured": true, 00:09:33.359 "data_offset": 2048, 00:09:33.359 "data_size": 63488 00:09:33.359 }, 00:09:33.360 { 00:09:33.360 "name": "BaseBdev2", 00:09:33.360 "uuid": "f70e513f-d019-4f6d-9780-2df811acb19b", 00:09:33.360 "is_configured": true, 00:09:33.360 "data_offset": 2048, 00:09:33.360 "data_size": 63488 00:09:33.360 }, 00:09:33.360 { 00:09:33.360 "name": "BaseBdev3", 00:09:33.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.360 "is_configured": false, 00:09:33.360 "data_offset": 0, 00:09:33.360 "data_size": 0 00:09:33.360 }, 00:09:33.360 { 00:09:33.360 "name": "BaseBdev4", 00:09:33.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.360 "is_configured": false, 00:09:33.360 "data_offset": 0, 00:09:33.360 "data_size": 0 00:09:33.360 } 00:09:33.360 ] 00:09:33.360 }' 00:09:33.360 01:10:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.360 01:10:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.618 [2024-10-15 01:10:46.284693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.618 BaseBdev3 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.618 [ 00:09:33.618 { 00:09:33.618 "name": "BaseBdev3", 00:09:33.618 "aliases": [ 00:09:33.618 "e11233ed-3b46-4774-9423-0ee67375c5f9" 00:09:33.618 ], 00:09:33.618 "product_name": "Malloc disk", 00:09:33.618 "block_size": 512, 00:09:33.618 "num_blocks": 65536, 00:09:33.618 "uuid": "e11233ed-3b46-4774-9423-0ee67375c5f9", 00:09:33.618 "assigned_rate_limits": { 00:09:33.618 "rw_ios_per_sec": 0, 00:09:33.618 "rw_mbytes_per_sec": 0, 00:09:33.618 "r_mbytes_per_sec": 0, 00:09:33.618 "w_mbytes_per_sec": 0 00:09:33.618 }, 00:09:33.618 "claimed": true, 00:09:33.618 "claim_type": "exclusive_write", 00:09:33.618 "zoned": false, 00:09:33.618 "supported_io_types": { 00:09:33.618 "read": true, 00:09:33.618 
"write": true, 00:09:33.618 "unmap": true, 00:09:33.618 "flush": true, 00:09:33.618 "reset": true, 00:09:33.618 "nvme_admin": false, 00:09:33.618 "nvme_io": false, 00:09:33.618 "nvme_io_md": false, 00:09:33.618 "write_zeroes": true, 00:09:33.618 "zcopy": true, 00:09:33.618 "get_zone_info": false, 00:09:33.618 "zone_management": false, 00:09:33.618 "zone_append": false, 00:09:33.618 "compare": false, 00:09:33.618 "compare_and_write": false, 00:09:33.618 "abort": true, 00:09:33.618 "seek_hole": false, 00:09:33.618 "seek_data": false, 00:09:33.618 "copy": true, 00:09:33.618 "nvme_iov_md": false 00:09:33.618 }, 00:09:33.618 "memory_domains": [ 00:09:33.618 { 00:09:33.618 "dma_device_id": "system", 00:09:33.618 "dma_device_type": 1 00:09:33.618 }, 00:09:33.618 { 00:09:33.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.618 "dma_device_type": 2 00:09:33.618 } 00:09:33.618 ], 00:09:33.618 "driver_specific": {} 00:09:33.618 } 00:09:33.618 ] 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.618 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:33.619 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.619 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.619 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.619 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.619 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.619 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.619 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.619 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.619 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.877 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.877 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.877 "name": "Existed_Raid", 00:09:33.877 "uuid": "743c8f1b-6f57-45dc-be99-6a5f2d984d5a", 00:09:33.877 "strip_size_kb": 64, 00:09:33.877 "state": "configuring", 00:09:33.877 "raid_level": "raid0", 00:09:33.877 "superblock": true, 00:09:33.877 "num_base_bdevs": 4, 00:09:33.877 "num_base_bdevs_discovered": 3, 00:09:33.877 "num_base_bdevs_operational": 4, 00:09:33.877 "base_bdevs_list": [ 00:09:33.877 { 00:09:33.877 "name": "BaseBdev1", 00:09:33.877 "uuid": "0979f029-6881-49b7-b661-cdc9ce7d18ba", 00:09:33.877 "is_configured": true, 00:09:33.877 "data_offset": 2048, 00:09:33.877 "data_size": 63488 00:09:33.877 }, 00:09:33.877 { 00:09:33.877 "name": "BaseBdev2", 00:09:33.877 "uuid": 
"f70e513f-d019-4f6d-9780-2df811acb19b", 00:09:33.877 "is_configured": true, 00:09:33.877 "data_offset": 2048, 00:09:33.877 "data_size": 63488 00:09:33.877 }, 00:09:33.877 { 00:09:33.877 "name": "BaseBdev3", 00:09:33.877 "uuid": "e11233ed-3b46-4774-9423-0ee67375c5f9", 00:09:33.877 "is_configured": true, 00:09:33.877 "data_offset": 2048, 00:09:33.877 "data_size": 63488 00:09:33.877 }, 00:09:33.877 { 00:09:33.877 "name": "BaseBdev4", 00:09:33.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.877 "is_configured": false, 00:09:33.877 "data_offset": 0, 00:09:33.877 "data_size": 0 00:09:33.877 } 00:09:33.877 ] 00:09:33.877 }' 00:09:33.877 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.877 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.136 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:34.136 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.136 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.136 [2024-10-15 01:10:46.751121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:34.136 [2024-10-15 01:10:46.751347] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:34.136 [2024-10-15 01:10:46.751366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:34.136 BaseBdev4 00:09:34.136 [2024-10-15 01:10:46.751688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:34.136 [2024-10-15 01:10:46.751821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:34.136 [2024-10-15 01:10:46.751833] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:09:34.136 [2024-10-15 01:10:46.751967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.136 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.136 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:34.136 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:34.136 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:34.136 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:34.136 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.136 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.136 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.137 [ 00:09:34.137 { 00:09:34.137 "name": "BaseBdev4", 00:09:34.137 "aliases": [ 00:09:34.137 "cdcf1a08-a3b4-4d3c-b80a-7aec35856815" 00:09:34.137 ], 00:09:34.137 "product_name": "Malloc disk", 00:09:34.137 "block_size": 512, 00:09:34.137 
"num_blocks": 65536, 00:09:34.137 "uuid": "cdcf1a08-a3b4-4d3c-b80a-7aec35856815", 00:09:34.137 "assigned_rate_limits": { 00:09:34.137 "rw_ios_per_sec": 0, 00:09:34.137 "rw_mbytes_per_sec": 0, 00:09:34.137 "r_mbytes_per_sec": 0, 00:09:34.137 "w_mbytes_per_sec": 0 00:09:34.137 }, 00:09:34.137 "claimed": true, 00:09:34.137 "claim_type": "exclusive_write", 00:09:34.137 "zoned": false, 00:09:34.137 "supported_io_types": { 00:09:34.137 "read": true, 00:09:34.137 "write": true, 00:09:34.137 "unmap": true, 00:09:34.137 "flush": true, 00:09:34.137 "reset": true, 00:09:34.137 "nvme_admin": false, 00:09:34.137 "nvme_io": false, 00:09:34.137 "nvme_io_md": false, 00:09:34.137 "write_zeroes": true, 00:09:34.137 "zcopy": true, 00:09:34.137 "get_zone_info": false, 00:09:34.137 "zone_management": false, 00:09:34.137 "zone_append": false, 00:09:34.137 "compare": false, 00:09:34.137 "compare_and_write": false, 00:09:34.137 "abort": true, 00:09:34.137 "seek_hole": false, 00:09:34.137 "seek_data": false, 00:09:34.137 "copy": true, 00:09:34.137 "nvme_iov_md": false 00:09:34.137 }, 00:09:34.137 "memory_domains": [ 00:09:34.137 { 00:09:34.137 "dma_device_id": "system", 00:09:34.137 "dma_device_type": 1 00:09:34.137 }, 00:09:34.137 { 00:09:34.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.137 "dma_device_type": 2 00:09:34.137 } 00:09:34.137 ], 00:09:34.137 "driver_specific": {} 00:09:34.137 } 00:09:34.137 ] 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.137 "name": "Existed_Raid", 00:09:34.137 "uuid": "743c8f1b-6f57-45dc-be99-6a5f2d984d5a", 00:09:34.137 "strip_size_kb": 64, 00:09:34.137 "state": "online", 00:09:34.137 "raid_level": "raid0", 00:09:34.137 "superblock": true, 00:09:34.137 "num_base_bdevs": 4, 
00:09:34.137 "num_base_bdevs_discovered": 4, 00:09:34.137 "num_base_bdevs_operational": 4, 00:09:34.137 "base_bdevs_list": [ 00:09:34.137 { 00:09:34.137 "name": "BaseBdev1", 00:09:34.137 "uuid": "0979f029-6881-49b7-b661-cdc9ce7d18ba", 00:09:34.137 "is_configured": true, 00:09:34.137 "data_offset": 2048, 00:09:34.137 "data_size": 63488 00:09:34.137 }, 00:09:34.137 { 00:09:34.137 "name": "BaseBdev2", 00:09:34.137 "uuid": "f70e513f-d019-4f6d-9780-2df811acb19b", 00:09:34.137 "is_configured": true, 00:09:34.137 "data_offset": 2048, 00:09:34.137 "data_size": 63488 00:09:34.137 }, 00:09:34.137 { 00:09:34.137 "name": "BaseBdev3", 00:09:34.137 "uuid": "e11233ed-3b46-4774-9423-0ee67375c5f9", 00:09:34.137 "is_configured": true, 00:09:34.137 "data_offset": 2048, 00:09:34.137 "data_size": 63488 00:09:34.137 }, 00:09:34.137 { 00:09:34.137 "name": "BaseBdev4", 00:09:34.137 "uuid": "cdcf1a08-a3b4-4d3c-b80a-7aec35856815", 00:09:34.137 "is_configured": true, 00:09:34.137 "data_offset": 2048, 00:09:34.137 "data_size": 63488 00:09:34.137 } 00:09:34.137 ] 00:09:34.137 }' 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.137 01:10:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.705 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:34.705 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:34.705 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:34.705 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.705 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.705 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.705 
01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.705 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:34.705 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.705 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.705 [2024-10-15 01:10:47.226735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.705 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.705 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.705 "name": "Existed_Raid", 00:09:34.705 "aliases": [ 00:09:34.705 "743c8f1b-6f57-45dc-be99-6a5f2d984d5a" 00:09:34.705 ], 00:09:34.705 "product_name": "Raid Volume", 00:09:34.705 "block_size": 512, 00:09:34.705 "num_blocks": 253952, 00:09:34.705 "uuid": "743c8f1b-6f57-45dc-be99-6a5f2d984d5a", 00:09:34.705 "assigned_rate_limits": { 00:09:34.705 "rw_ios_per_sec": 0, 00:09:34.705 "rw_mbytes_per_sec": 0, 00:09:34.705 "r_mbytes_per_sec": 0, 00:09:34.705 "w_mbytes_per_sec": 0 00:09:34.705 }, 00:09:34.705 "claimed": false, 00:09:34.705 "zoned": false, 00:09:34.705 "supported_io_types": { 00:09:34.705 "read": true, 00:09:34.705 "write": true, 00:09:34.705 "unmap": true, 00:09:34.705 "flush": true, 00:09:34.705 "reset": true, 00:09:34.705 "nvme_admin": false, 00:09:34.705 "nvme_io": false, 00:09:34.705 "nvme_io_md": false, 00:09:34.705 "write_zeroes": true, 00:09:34.705 "zcopy": false, 00:09:34.705 "get_zone_info": false, 00:09:34.705 "zone_management": false, 00:09:34.705 "zone_append": false, 00:09:34.705 "compare": false, 00:09:34.705 "compare_and_write": false, 00:09:34.705 "abort": false, 00:09:34.705 "seek_hole": false, 00:09:34.705 "seek_data": false, 00:09:34.705 "copy": false, 00:09:34.705 
"nvme_iov_md": false 00:09:34.705 }, 00:09:34.705 "memory_domains": [ 00:09:34.705 { 00:09:34.705 "dma_device_id": "system", 00:09:34.705 "dma_device_type": 1 00:09:34.705 }, 00:09:34.705 { 00:09:34.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.705 "dma_device_type": 2 00:09:34.706 }, 00:09:34.706 { 00:09:34.706 "dma_device_id": "system", 00:09:34.706 "dma_device_type": 1 00:09:34.706 }, 00:09:34.706 { 00:09:34.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.706 "dma_device_type": 2 00:09:34.706 }, 00:09:34.706 { 00:09:34.706 "dma_device_id": "system", 00:09:34.706 "dma_device_type": 1 00:09:34.706 }, 00:09:34.706 { 00:09:34.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.706 "dma_device_type": 2 00:09:34.706 }, 00:09:34.706 { 00:09:34.706 "dma_device_id": "system", 00:09:34.706 "dma_device_type": 1 00:09:34.706 }, 00:09:34.706 { 00:09:34.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.706 "dma_device_type": 2 00:09:34.706 } 00:09:34.706 ], 00:09:34.706 "driver_specific": { 00:09:34.706 "raid": { 00:09:34.706 "uuid": "743c8f1b-6f57-45dc-be99-6a5f2d984d5a", 00:09:34.706 "strip_size_kb": 64, 00:09:34.706 "state": "online", 00:09:34.706 "raid_level": "raid0", 00:09:34.706 "superblock": true, 00:09:34.706 "num_base_bdevs": 4, 00:09:34.706 "num_base_bdevs_discovered": 4, 00:09:34.706 "num_base_bdevs_operational": 4, 00:09:34.706 "base_bdevs_list": [ 00:09:34.706 { 00:09:34.706 "name": "BaseBdev1", 00:09:34.706 "uuid": "0979f029-6881-49b7-b661-cdc9ce7d18ba", 00:09:34.706 "is_configured": true, 00:09:34.706 "data_offset": 2048, 00:09:34.706 "data_size": 63488 00:09:34.706 }, 00:09:34.706 { 00:09:34.706 "name": "BaseBdev2", 00:09:34.706 "uuid": "f70e513f-d019-4f6d-9780-2df811acb19b", 00:09:34.706 "is_configured": true, 00:09:34.706 "data_offset": 2048, 00:09:34.706 "data_size": 63488 00:09:34.706 }, 00:09:34.706 { 00:09:34.706 "name": "BaseBdev3", 00:09:34.706 "uuid": "e11233ed-3b46-4774-9423-0ee67375c5f9", 00:09:34.706 "is_configured": true, 
00:09:34.706 "data_offset": 2048, 00:09:34.706 "data_size": 63488 00:09:34.706 }, 00:09:34.706 { 00:09:34.706 "name": "BaseBdev4", 00:09:34.706 "uuid": "cdcf1a08-a3b4-4d3c-b80a-7aec35856815", 00:09:34.706 "is_configured": true, 00:09:34.706 "data_offset": 2048, 00:09:34.706 "data_size": 63488 00:09:34.706 } 00:09:34.706 ] 00:09:34.706 } 00:09:34.706 } 00:09:34.706 }' 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:34.706 BaseBdev2 00:09:34.706 BaseBdev3 00:09:34.706 BaseBdev4' 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.706 01:10:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.706 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.966 [2024-10-15 01:10:47.537922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:34.966 [2024-10-15 01:10:47.537953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.966 [2024-10-15 01:10:47.538003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.966 "name": "Existed_Raid", 00:09:34.966 "uuid": "743c8f1b-6f57-45dc-be99-6a5f2d984d5a", 00:09:34.966 "strip_size_kb": 64, 00:09:34.966 "state": "offline", 00:09:34.966 "raid_level": "raid0", 00:09:34.966 "superblock": true, 00:09:34.966 "num_base_bdevs": 4, 00:09:34.966 "num_base_bdevs_discovered": 3, 00:09:34.966 "num_base_bdevs_operational": 3, 00:09:34.966 "base_bdevs_list": [ 00:09:34.966 { 00:09:34.966 "name": null, 00:09:34.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.966 "is_configured": false, 00:09:34.966 "data_offset": 0, 00:09:34.966 "data_size": 63488 00:09:34.966 }, 00:09:34.966 { 00:09:34.966 "name": "BaseBdev2", 00:09:34.966 "uuid": "f70e513f-d019-4f6d-9780-2df811acb19b", 00:09:34.966 "is_configured": true, 00:09:34.966 "data_offset": 2048, 00:09:34.966 "data_size": 63488 00:09:34.966 }, 00:09:34.966 { 00:09:34.966 "name": "BaseBdev3", 00:09:34.966 "uuid": "e11233ed-3b46-4774-9423-0ee67375c5f9", 00:09:34.966 "is_configured": true, 00:09:34.966 "data_offset": 2048, 00:09:34.966 "data_size": 63488 00:09:34.966 }, 00:09:34.966 { 00:09:34.966 "name": "BaseBdev4", 00:09:34.966 "uuid": "cdcf1a08-a3b4-4d3c-b80a-7aec35856815", 00:09:34.966 "is_configured": true, 00:09:34.966 "data_offset": 2048, 00:09:34.966 "data_size": 63488 00:09:34.966 } 00:09:34.966 ] 00:09:34.966 }' 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.966 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.536 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:35.536 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.536 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.536 
01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.536 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.536 01:10:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.536 01:10:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.536 [2024-10-15 01:10:48.012489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.536 [2024-10-15 01:10:48.075604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:35.536 01:10:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.536 [2024-10-15 01:10:48.146562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:35.536 [2024-10-15 01:10:48.146604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.536 BaseBdev2 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.536 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.536 [ 00:09:35.536 { 00:09:35.536 "name": "BaseBdev2", 00:09:35.536 "aliases": [ 00:09:35.536 
"594a6ab2-92bf-455e-a6c1-d8c1d43f5de3" 00:09:35.536 ], 00:09:35.536 "product_name": "Malloc disk", 00:09:35.536 "block_size": 512, 00:09:35.536 "num_blocks": 65536, 00:09:35.536 "uuid": "594a6ab2-92bf-455e-a6c1-d8c1d43f5de3", 00:09:35.536 "assigned_rate_limits": { 00:09:35.536 "rw_ios_per_sec": 0, 00:09:35.536 "rw_mbytes_per_sec": 0, 00:09:35.536 "r_mbytes_per_sec": 0, 00:09:35.536 "w_mbytes_per_sec": 0 00:09:35.536 }, 00:09:35.536 "claimed": false, 00:09:35.536 "zoned": false, 00:09:35.536 "supported_io_types": { 00:09:35.536 "read": true, 00:09:35.536 "write": true, 00:09:35.536 "unmap": true, 00:09:35.536 "flush": true, 00:09:35.536 "reset": true, 00:09:35.536 "nvme_admin": false, 00:09:35.536 "nvme_io": false, 00:09:35.536 "nvme_io_md": false, 00:09:35.536 "write_zeroes": true, 00:09:35.536 "zcopy": true, 00:09:35.536 "get_zone_info": false, 00:09:35.536 "zone_management": false, 00:09:35.536 "zone_append": false, 00:09:35.536 "compare": false, 00:09:35.536 "compare_and_write": false, 00:09:35.536 "abort": true, 00:09:35.536 "seek_hole": false, 00:09:35.536 "seek_data": false, 00:09:35.536 "copy": true, 00:09:35.537 "nvme_iov_md": false 00:09:35.537 }, 00:09:35.537 "memory_domains": [ 00:09:35.537 { 00:09:35.537 "dma_device_id": "system", 00:09:35.537 "dma_device_type": 1 00:09:35.537 }, 00:09:35.537 { 00:09:35.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.537 "dma_device_type": 2 00:09:35.537 } 00:09:35.537 ], 00:09:35.537 "driver_specific": {} 00:09:35.537 } 00:09:35.537 ] 00:09:35.537 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.537 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:35.537 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:35.537 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.537 01:10:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:35.537 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.537 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.797 BaseBdev3 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.797 [ 00:09:35.797 { 
00:09:35.797 "name": "BaseBdev3", 00:09:35.797 "aliases": [ 00:09:35.797 "a8d19418-297d-41a1-a591-71a7c34196ed" 00:09:35.797 ], 00:09:35.797 "product_name": "Malloc disk", 00:09:35.797 "block_size": 512, 00:09:35.797 "num_blocks": 65536, 00:09:35.797 "uuid": "a8d19418-297d-41a1-a591-71a7c34196ed", 00:09:35.797 "assigned_rate_limits": { 00:09:35.797 "rw_ios_per_sec": 0, 00:09:35.797 "rw_mbytes_per_sec": 0, 00:09:35.797 "r_mbytes_per_sec": 0, 00:09:35.797 "w_mbytes_per_sec": 0 00:09:35.797 }, 00:09:35.797 "claimed": false, 00:09:35.797 "zoned": false, 00:09:35.797 "supported_io_types": { 00:09:35.797 "read": true, 00:09:35.797 "write": true, 00:09:35.797 "unmap": true, 00:09:35.797 "flush": true, 00:09:35.797 "reset": true, 00:09:35.797 "nvme_admin": false, 00:09:35.797 "nvme_io": false, 00:09:35.797 "nvme_io_md": false, 00:09:35.797 "write_zeroes": true, 00:09:35.797 "zcopy": true, 00:09:35.797 "get_zone_info": false, 00:09:35.797 "zone_management": false, 00:09:35.797 "zone_append": false, 00:09:35.797 "compare": false, 00:09:35.797 "compare_and_write": false, 00:09:35.797 "abort": true, 00:09:35.797 "seek_hole": false, 00:09:35.797 "seek_data": false, 00:09:35.797 "copy": true, 00:09:35.797 "nvme_iov_md": false 00:09:35.797 }, 00:09:35.797 "memory_domains": [ 00:09:35.797 { 00:09:35.797 "dma_device_id": "system", 00:09:35.797 "dma_device_type": 1 00:09:35.797 }, 00:09:35.797 { 00:09:35.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.797 "dma_device_type": 2 00:09:35.797 } 00:09:35.797 ], 00:09:35.797 "driver_specific": {} 00:09:35.797 } 00:09:35.797 ] 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.797 BaseBdev4 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.797 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:35.797 [ 00:09:35.797 { 00:09:35.797 "name": "BaseBdev4", 00:09:35.797 "aliases": [ 00:09:35.797 "b2c41c03-ec05-4fae-979c-ad31829e45dd" 00:09:35.797 ], 00:09:35.797 "product_name": "Malloc disk", 00:09:35.797 "block_size": 512, 00:09:35.797 "num_blocks": 65536, 00:09:35.797 "uuid": "b2c41c03-ec05-4fae-979c-ad31829e45dd", 00:09:35.797 "assigned_rate_limits": { 00:09:35.797 "rw_ios_per_sec": 0, 00:09:35.797 "rw_mbytes_per_sec": 0, 00:09:35.797 "r_mbytes_per_sec": 0, 00:09:35.797 "w_mbytes_per_sec": 0 00:09:35.797 }, 00:09:35.797 "claimed": false, 00:09:35.797 "zoned": false, 00:09:35.797 "supported_io_types": { 00:09:35.797 "read": true, 00:09:35.797 "write": true, 00:09:35.797 "unmap": true, 00:09:35.797 "flush": true, 00:09:35.797 "reset": true, 00:09:35.797 "nvme_admin": false, 00:09:35.797 "nvme_io": false, 00:09:35.797 "nvme_io_md": false, 00:09:35.797 "write_zeroes": true, 00:09:35.797 "zcopy": true, 00:09:35.797 "get_zone_info": false, 00:09:35.797 "zone_management": false, 00:09:35.797 "zone_append": false, 00:09:35.797 "compare": false, 00:09:35.797 "compare_and_write": false, 00:09:35.797 "abort": true, 00:09:35.797 "seek_hole": false, 00:09:35.797 "seek_data": false, 00:09:35.797 "copy": true, 00:09:35.797 "nvme_iov_md": false 00:09:35.797 }, 00:09:35.797 "memory_domains": [ 00:09:35.797 { 00:09:35.797 "dma_device_id": "system", 00:09:35.797 "dma_device_type": 1 00:09:35.797 }, 00:09:35.797 { 00:09:35.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.797 "dma_device_type": 2 00:09:35.797 } 00:09:35.797 ], 00:09:35.797 "driver_specific": {} 00:09:35.798 } 00:09:35.798 ] 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:35.798 01:10:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.798 [2024-10-15 01:10:48.366292] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.798 [2024-10-15 01:10:48.366332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.798 [2024-10-15 01:10:48.366376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.798 [2024-10-15 01:10:48.368147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.798 [2024-10-15 01:10:48.368210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.798 "name": "Existed_Raid", 00:09:35.798 "uuid": "5304ff88-fbdf-4589-affe-3dff352a9b70", 00:09:35.798 "strip_size_kb": 64, 00:09:35.798 "state": "configuring", 00:09:35.798 "raid_level": "raid0", 00:09:35.798 "superblock": true, 00:09:35.798 "num_base_bdevs": 4, 00:09:35.798 "num_base_bdevs_discovered": 3, 00:09:35.798 "num_base_bdevs_operational": 4, 00:09:35.798 "base_bdevs_list": [ 00:09:35.798 { 00:09:35.798 "name": "BaseBdev1", 00:09:35.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.798 "is_configured": false, 00:09:35.798 "data_offset": 0, 00:09:35.798 "data_size": 0 00:09:35.798 }, 00:09:35.798 { 00:09:35.798 "name": "BaseBdev2", 00:09:35.798 "uuid": "594a6ab2-92bf-455e-a6c1-d8c1d43f5de3", 00:09:35.798 "is_configured": true, 00:09:35.798 "data_offset": 2048, 00:09:35.798 "data_size": 63488 
00:09:35.798 }, 00:09:35.798 { 00:09:35.798 "name": "BaseBdev3", 00:09:35.798 "uuid": "a8d19418-297d-41a1-a591-71a7c34196ed", 00:09:35.798 "is_configured": true, 00:09:35.798 "data_offset": 2048, 00:09:35.798 "data_size": 63488 00:09:35.798 }, 00:09:35.798 { 00:09:35.798 "name": "BaseBdev4", 00:09:35.798 "uuid": "b2c41c03-ec05-4fae-979c-ad31829e45dd", 00:09:35.798 "is_configured": true, 00:09:35.798 "data_offset": 2048, 00:09:35.798 "data_size": 63488 00:09:35.798 } 00:09:35.798 ] 00:09:35.798 }' 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.798 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.367 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.368 [2024-10-15 01:10:48.789568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.368 "name": "Existed_Raid", 00:09:36.368 "uuid": "5304ff88-fbdf-4589-affe-3dff352a9b70", 00:09:36.368 "strip_size_kb": 64, 00:09:36.368 "state": "configuring", 00:09:36.368 "raid_level": "raid0", 00:09:36.368 "superblock": true, 00:09:36.368 "num_base_bdevs": 4, 00:09:36.368 "num_base_bdevs_discovered": 2, 00:09:36.368 "num_base_bdevs_operational": 4, 00:09:36.368 "base_bdevs_list": [ 00:09:36.368 { 00:09:36.368 "name": "BaseBdev1", 00:09:36.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.368 "is_configured": false, 00:09:36.368 "data_offset": 0, 00:09:36.368 "data_size": 0 00:09:36.368 }, 00:09:36.368 { 00:09:36.368 "name": null, 00:09:36.368 "uuid": "594a6ab2-92bf-455e-a6c1-d8c1d43f5de3", 00:09:36.368 "is_configured": false, 00:09:36.368 "data_offset": 0, 00:09:36.368 "data_size": 63488 
00:09:36.368 }, 00:09:36.368 { 00:09:36.368 "name": "BaseBdev3", 00:09:36.368 "uuid": "a8d19418-297d-41a1-a591-71a7c34196ed", 00:09:36.368 "is_configured": true, 00:09:36.368 "data_offset": 2048, 00:09:36.368 "data_size": 63488 00:09:36.368 }, 00:09:36.368 { 00:09:36.368 "name": "BaseBdev4", 00:09:36.368 "uuid": "b2c41c03-ec05-4fae-979c-ad31829e45dd", 00:09:36.368 "is_configured": true, 00:09:36.368 "data_offset": 2048, 00:09:36.368 "data_size": 63488 00:09:36.368 } 00:09:36.368 ] 00:09:36.368 }' 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.368 01:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.628 [2024-10-15 01:10:49.267766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.628 BaseBdev1 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.628 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.628 [ 00:09:36.628 { 00:09:36.628 "name": "BaseBdev1", 00:09:36.628 "aliases": [ 00:09:36.628 "4f778f0c-4e31-45f3-9d5b-c45e38bce19e" 00:09:36.628 ], 00:09:36.628 "product_name": "Malloc disk", 00:09:36.628 "block_size": 512, 00:09:36.628 "num_blocks": 65536, 00:09:36.628 "uuid": "4f778f0c-4e31-45f3-9d5b-c45e38bce19e", 00:09:36.628 "assigned_rate_limits": { 00:09:36.628 "rw_ios_per_sec": 0, 00:09:36.628 "rw_mbytes_per_sec": 0, 
00:09:36.628 "r_mbytes_per_sec": 0, 00:09:36.628 "w_mbytes_per_sec": 0 00:09:36.628 }, 00:09:36.628 "claimed": true, 00:09:36.628 "claim_type": "exclusive_write", 00:09:36.628 "zoned": false, 00:09:36.628 "supported_io_types": { 00:09:36.628 "read": true, 00:09:36.628 "write": true, 00:09:36.628 "unmap": true, 00:09:36.628 "flush": true, 00:09:36.628 "reset": true, 00:09:36.628 "nvme_admin": false, 00:09:36.628 "nvme_io": false, 00:09:36.628 "nvme_io_md": false, 00:09:36.628 "write_zeroes": true, 00:09:36.628 "zcopy": true, 00:09:36.628 "get_zone_info": false, 00:09:36.628 "zone_management": false, 00:09:36.628 "zone_append": false, 00:09:36.629 "compare": false, 00:09:36.629 "compare_and_write": false, 00:09:36.629 "abort": true, 00:09:36.629 "seek_hole": false, 00:09:36.629 "seek_data": false, 00:09:36.629 "copy": true, 00:09:36.629 "nvme_iov_md": false 00:09:36.629 }, 00:09:36.629 "memory_domains": [ 00:09:36.629 { 00:09:36.629 "dma_device_id": "system", 00:09:36.629 "dma_device_type": 1 00:09:36.629 }, 00:09:36.629 { 00:09:36.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.629 "dma_device_type": 2 00:09:36.629 } 00:09:36.629 ], 00:09:36.629 "driver_specific": {} 00:09:36.629 } 00:09:36.629 ] 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.629 01:10:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.629 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.888 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.888 "name": "Existed_Raid", 00:09:36.888 "uuid": "5304ff88-fbdf-4589-affe-3dff352a9b70", 00:09:36.888 "strip_size_kb": 64, 00:09:36.888 "state": "configuring", 00:09:36.888 "raid_level": "raid0", 00:09:36.888 "superblock": true, 00:09:36.888 "num_base_bdevs": 4, 00:09:36.888 "num_base_bdevs_discovered": 3, 00:09:36.888 "num_base_bdevs_operational": 4, 00:09:36.888 "base_bdevs_list": [ 00:09:36.889 { 00:09:36.889 "name": "BaseBdev1", 00:09:36.889 "uuid": "4f778f0c-4e31-45f3-9d5b-c45e38bce19e", 00:09:36.889 "is_configured": true, 00:09:36.889 "data_offset": 2048, 00:09:36.889 "data_size": 63488 00:09:36.889 }, 00:09:36.889 { 
00:09:36.889 "name": null, 00:09:36.889 "uuid": "594a6ab2-92bf-455e-a6c1-d8c1d43f5de3", 00:09:36.889 "is_configured": false, 00:09:36.889 "data_offset": 0, 00:09:36.889 "data_size": 63488 00:09:36.889 }, 00:09:36.889 { 00:09:36.889 "name": "BaseBdev3", 00:09:36.889 "uuid": "a8d19418-297d-41a1-a591-71a7c34196ed", 00:09:36.889 "is_configured": true, 00:09:36.889 "data_offset": 2048, 00:09:36.889 "data_size": 63488 00:09:36.889 }, 00:09:36.889 { 00:09:36.889 "name": "BaseBdev4", 00:09:36.889 "uuid": "b2c41c03-ec05-4fae-979c-ad31829e45dd", 00:09:36.889 "is_configured": true, 00:09:36.889 "data_offset": 2048, 00:09:36.889 "data_size": 63488 00:09:36.889 } 00:09:36.889 ] 00:09:36.889 }' 00:09:36.889 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.889 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.149 [2024-10-15 01:10:49.814927] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.149 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.408 01:10:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.408 "name": "Existed_Raid", 00:09:37.408 "uuid": "5304ff88-fbdf-4589-affe-3dff352a9b70", 00:09:37.408 "strip_size_kb": 64, 00:09:37.408 "state": "configuring", 00:09:37.408 "raid_level": "raid0", 00:09:37.408 "superblock": true, 00:09:37.408 "num_base_bdevs": 4, 00:09:37.408 "num_base_bdevs_discovered": 2, 00:09:37.408 "num_base_bdevs_operational": 4, 00:09:37.408 "base_bdevs_list": [ 00:09:37.408 { 00:09:37.408 "name": "BaseBdev1", 00:09:37.408 "uuid": "4f778f0c-4e31-45f3-9d5b-c45e38bce19e", 00:09:37.408 "is_configured": true, 00:09:37.408 "data_offset": 2048, 00:09:37.408 "data_size": 63488 00:09:37.408 }, 00:09:37.408 { 00:09:37.408 "name": null, 00:09:37.408 "uuid": "594a6ab2-92bf-455e-a6c1-d8c1d43f5de3", 00:09:37.408 "is_configured": false, 00:09:37.408 "data_offset": 0, 00:09:37.408 "data_size": 63488 00:09:37.408 }, 00:09:37.408 { 00:09:37.408 "name": null, 00:09:37.408 "uuid": "a8d19418-297d-41a1-a591-71a7c34196ed", 00:09:37.408 "is_configured": false, 00:09:37.408 "data_offset": 0, 00:09:37.408 "data_size": 63488 00:09:37.408 }, 00:09:37.408 { 00:09:37.408 "name": "BaseBdev4", 00:09:37.408 "uuid": "b2c41c03-ec05-4fae-979c-ad31829e45dd", 00:09:37.408 "is_configured": true, 00:09:37.408 "data_offset": 2048, 00:09:37.408 "data_size": 63488 00:09:37.408 } 00:09:37.408 ] 00:09:37.408 }' 00:09:37.408 01:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.408 01:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.668 
01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.668 [2024-10-15 01:10:50.310108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.668 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.668 "name": "Existed_Raid", 00:09:37.668 "uuid": "5304ff88-fbdf-4589-affe-3dff352a9b70", 00:09:37.668 "strip_size_kb": 64, 00:09:37.668 "state": "configuring", 00:09:37.668 "raid_level": "raid0", 00:09:37.668 "superblock": true, 00:09:37.668 "num_base_bdevs": 4, 00:09:37.668 "num_base_bdevs_discovered": 3, 00:09:37.668 "num_base_bdevs_operational": 4, 00:09:37.668 "base_bdevs_list": [ 00:09:37.668 { 00:09:37.668 "name": "BaseBdev1", 00:09:37.668 "uuid": "4f778f0c-4e31-45f3-9d5b-c45e38bce19e", 00:09:37.668 "is_configured": true, 00:09:37.668 "data_offset": 2048, 00:09:37.668 "data_size": 63488 00:09:37.668 }, 00:09:37.668 { 00:09:37.668 "name": null, 00:09:37.668 "uuid": "594a6ab2-92bf-455e-a6c1-d8c1d43f5de3", 00:09:37.668 "is_configured": false, 00:09:37.668 "data_offset": 0, 00:09:37.668 "data_size": 63488 00:09:37.668 }, 00:09:37.668 { 00:09:37.668 "name": "BaseBdev3", 00:09:37.668 "uuid": "a8d19418-297d-41a1-a591-71a7c34196ed", 00:09:37.668 "is_configured": true, 00:09:37.668 "data_offset": 2048, 00:09:37.668 "data_size": 63488 00:09:37.668 }, 00:09:37.668 { 00:09:37.668 "name": "BaseBdev4", 00:09:37.668 "uuid": 
"b2c41c03-ec05-4fae-979c-ad31829e45dd", 00:09:37.668 "is_configured": true, 00:09:37.668 "data_offset": 2048, 00:09:37.668 "data_size": 63488 00:09:37.669 } 00:09:37.669 ] 00:09:37.669 }' 00:09:37.669 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.669 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.254 [2024-10-15 01:10:50.857199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.254 "name": "Existed_Raid", 00:09:38.254 "uuid": "5304ff88-fbdf-4589-affe-3dff352a9b70", 00:09:38.254 "strip_size_kb": 64, 00:09:38.254 "state": "configuring", 00:09:38.254 "raid_level": "raid0", 00:09:38.254 "superblock": true, 00:09:38.254 "num_base_bdevs": 4, 00:09:38.254 "num_base_bdevs_discovered": 2, 00:09:38.254 "num_base_bdevs_operational": 4, 00:09:38.254 "base_bdevs_list": [ 00:09:38.254 { 00:09:38.254 "name": null, 00:09:38.254 
"uuid": "4f778f0c-4e31-45f3-9d5b-c45e38bce19e", 00:09:38.254 "is_configured": false, 00:09:38.254 "data_offset": 0, 00:09:38.254 "data_size": 63488 00:09:38.254 }, 00:09:38.254 { 00:09:38.254 "name": null, 00:09:38.254 "uuid": "594a6ab2-92bf-455e-a6c1-d8c1d43f5de3", 00:09:38.254 "is_configured": false, 00:09:38.254 "data_offset": 0, 00:09:38.254 "data_size": 63488 00:09:38.254 }, 00:09:38.254 { 00:09:38.254 "name": "BaseBdev3", 00:09:38.254 "uuid": "a8d19418-297d-41a1-a591-71a7c34196ed", 00:09:38.254 "is_configured": true, 00:09:38.254 "data_offset": 2048, 00:09:38.254 "data_size": 63488 00:09:38.254 }, 00:09:38.254 { 00:09:38.254 "name": "BaseBdev4", 00:09:38.254 "uuid": "b2c41c03-ec05-4fae-979c-ad31829e45dd", 00:09:38.254 "is_configured": true, 00:09:38.254 "data_offset": 2048, 00:09:38.254 "data_size": 63488 00:09:38.254 } 00:09:38.254 ] 00:09:38.254 }' 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.254 01:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.826 [2024-10-15 01:10:51.374755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.826 01:10:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.826 "name": "Existed_Raid", 00:09:38.826 "uuid": "5304ff88-fbdf-4589-affe-3dff352a9b70", 00:09:38.826 "strip_size_kb": 64, 00:09:38.826 "state": "configuring", 00:09:38.826 "raid_level": "raid0", 00:09:38.826 "superblock": true, 00:09:38.826 "num_base_bdevs": 4, 00:09:38.826 "num_base_bdevs_discovered": 3, 00:09:38.826 "num_base_bdevs_operational": 4, 00:09:38.826 "base_bdevs_list": [ 00:09:38.826 { 00:09:38.826 "name": null, 00:09:38.826 "uuid": "4f778f0c-4e31-45f3-9d5b-c45e38bce19e", 00:09:38.826 "is_configured": false, 00:09:38.826 "data_offset": 0, 00:09:38.826 "data_size": 63488 00:09:38.826 }, 00:09:38.826 { 00:09:38.826 "name": "BaseBdev2", 00:09:38.826 "uuid": "594a6ab2-92bf-455e-a6c1-d8c1d43f5de3", 00:09:38.826 "is_configured": true, 00:09:38.826 "data_offset": 2048, 00:09:38.826 "data_size": 63488 00:09:38.826 }, 00:09:38.826 { 00:09:38.826 "name": "BaseBdev3", 00:09:38.826 "uuid": "a8d19418-297d-41a1-a591-71a7c34196ed", 00:09:38.826 "is_configured": true, 00:09:38.826 "data_offset": 2048, 00:09:38.826 "data_size": 63488 00:09:38.826 }, 00:09:38.826 { 00:09:38.826 "name": "BaseBdev4", 00:09:38.826 "uuid": "b2c41c03-ec05-4fae-979c-ad31829e45dd", 00:09:38.826 "is_configured": true, 00:09:38.826 "data_offset": 2048, 00:09:38.826 "data_size": 63488 00:09:38.826 } 00:09:38.826 ] 00:09:38.826 }' 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.826 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.085 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.085 01:10:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.085 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.085 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:39.085 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4f778f0c-4e31-45f3-9d5b-c45e38bce19e 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.345 [2024-10-15 01:10:51.892835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:39.345 [2024-10-15 01:10:51.893013] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:39.345 [2024-10-15 01:10:51.893032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:39.345 NewBaseBdev 00:09:39.345 [2024-10-15 01:10:51.893312] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:09:39.345 [2024-10-15 01:10:51.893425] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:39.345 [2024-10-15 01:10:51.893436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:39.345 [2024-10-15 01:10:51.893526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.345 
01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.345 [ 00:09:39.345 { 00:09:39.345 "name": "NewBaseBdev", 00:09:39.345 "aliases": [ 00:09:39.345 "4f778f0c-4e31-45f3-9d5b-c45e38bce19e" 00:09:39.345 ], 00:09:39.345 "product_name": "Malloc disk", 00:09:39.345 "block_size": 512, 00:09:39.345 "num_blocks": 65536, 00:09:39.345 "uuid": "4f778f0c-4e31-45f3-9d5b-c45e38bce19e", 00:09:39.345 "assigned_rate_limits": { 00:09:39.345 "rw_ios_per_sec": 0, 00:09:39.345 "rw_mbytes_per_sec": 0, 00:09:39.345 "r_mbytes_per_sec": 0, 00:09:39.345 "w_mbytes_per_sec": 0 00:09:39.345 }, 00:09:39.345 "claimed": true, 00:09:39.345 "claim_type": "exclusive_write", 00:09:39.345 "zoned": false, 00:09:39.345 "supported_io_types": { 00:09:39.345 "read": true, 00:09:39.345 "write": true, 00:09:39.345 "unmap": true, 00:09:39.345 "flush": true, 00:09:39.345 "reset": true, 00:09:39.345 "nvme_admin": false, 00:09:39.345 "nvme_io": false, 00:09:39.345 "nvme_io_md": false, 00:09:39.345 "write_zeroes": true, 00:09:39.345 "zcopy": true, 00:09:39.345 "get_zone_info": false, 00:09:39.345 "zone_management": false, 00:09:39.345 "zone_append": false, 00:09:39.345 "compare": false, 00:09:39.345 "compare_and_write": false, 00:09:39.345 "abort": true, 00:09:39.345 "seek_hole": false, 00:09:39.345 "seek_data": false, 00:09:39.345 "copy": true, 00:09:39.345 "nvme_iov_md": false 00:09:39.345 }, 00:09:39.345 "memory_domains": [ 00:09:39.345 { 00:09:39.345 "dma_device_id": "system", 00:09:39.345 "dma_device_type": 1 00:09:39.345 }, 00:09:39.345 { 00:09:39.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.345 "dma_device_type": 2 00:09:39.345 } 00:09:39.345 ], 00:09:39.345 "driver_specific": {} 00:09:39.345 } 00:09:39.345 ] 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:39.345 01:10:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.345 "name": "Existed_Raid", 00:09:39.345 "uuid": "5304ff88-fbdf-4589-affe-3dff352a9b70", 00:09:39.345 "strip_size_kb": 64, 00:09:39.345 
"state": "online", 00:09:39.345 "raid_level": "raid0", 00:09:39.345 "superblock": true, 00:09:39.345 "num_base_bdevs": 4, 00:09:39.345 "num_base_bdevs_discovered": 4, 00:09:39.345 "num_base_bdevs_operational": 4, 00:09:39.345 "base_bdevs_list": [ 00:09:39.345 { 00:09:39.345 "name": "NewBaseBdev", 00:09:39.345 "uuid": "4f778f0c-4e31-45f3-9d5b-c45e38bce19e", 00:09:39.345 "is_configured": true, 00:09:39.345 "data_offset": 2048, 00:09:39.345 "data_size": 63488 00:09:39.345 }, 00:09:39.345 { 00:09:39.345 "name": "BaseBdev2", 00:09:39.345 "uuid": "594a6ab2-92bf-455e-a6c1-d8c1d43f5de3", 00:09:39.345 "is_configured": true, 00:09:39.345 "data_offset": 2048, 00:09:39.345 "data_size": 63488 00:09:39.345 }, 00:09:39.345 { 00:09:39.345 "name": "BaseBdev3", 00:09:39.345 "uuid": "a8d19418-297d-41a1-a591-71a7c34196ed", 00:09:39.345 "is_configured": true, 00:09:39.345 "data_offset": 2048, 00:09:39.345 "data_size": 63488 00:09:39.345 }, 00:09:39.345 { 00:09:39.345 "name": "BaseBdev4", 00:09:39.345 "uuid": "b2c41c03-ec05-4fae-979c-ad31829e45dd", 00:09:39.345 "is_configured": true, 00:09:39.345 "data_offset": 2048, 00:09:39.345 "data_size": 63488 00:09:39.345 } 00:09:39.345 ] 00:09:39.345 }' 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.345 01:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.604 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:39.604 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:39.604 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.604 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.604 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.604 
01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.864 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.864 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:39.864 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.864 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.864 [2024-10-15 01:10:52.336519] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.864 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.864 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.864 "name": "Existed_Raid", 00:09:39.864 "aliases": [ 00:09:39.864 "5304ff88-fbdf-4589-affe-3dff352a9b70" 00:09:39.864 ], 00:09:39.864 "product_name": "Raid Volume", 00:09:39.864 "block_size": 512, 00:09:39.864 "num_blocks": 253952, 00:09:39.864 "uuid": "5304ff88-fbdf-4589-affe-3dff352a9b70", 00:09:39.864 "assigned_rate_limits": { 00:09:39.864 "rw_ios_per_sec": 0, 00:09:39.864 "rw_mbytes_per_sec": 0, 00:09:39.864 "r_mbytes_per_sec": 0, 00:09:39.864 "w_mbytes_per_sec": 0 00:09:39.864 }, 00:09:39.864 "claimed": false, 00:09:39.864 "zoned": false, 00:09:39.864 "supported_io_types": { 00:09:39.864 "read": true, 00:09:39.864 "write": true, 00:09:39.864 "unmap": true, 00:09:39.864 "flush": true, 00:09:39.864 "reset": true, 00:09:39.864 "nvme_admin": false, 00:09:39.864 "nvme_io": false, 00:09:39.864 "nvme_io_md": false, 00:09:39.864 "write_zeroes": true, 00:09:39.864 "zcopy": false, 00:09:39.864 "get_zone_info": false, 00:09:39.864 "zone_management": false, 00:09:39.864 "zone_append": false, 00:09:39.864 "compare": false, 00:09:39.864 "compare_and_write": false, 00:09:39.864 "abort": 
false, 00:09:39.864 "seek_hole": false, 00:09:39.864 "seek_data": false, 00:09:39.864 "copy": false, 00:09:39.864 "nvme_iov_md": false 00:09:39.864 }, 00:09:39.864 "memory_domains": [ 00:09:39.864 { 00:09:39.864 "dma_device_id": "system", 00:09:39.864 "dma_device_type": 1 00:09:39.864 }, 00:09:39.864 { 00:09:39.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.864 "dma_device_type": 2 00:09:39.864 }, 00:09:39.864 { 00:09:39.864 "dma_device_id": "system", 00:09:39.864 "dma_device_type": 1 00:09:39.864 }, 00:09:39.864 { 00:09:39.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.864 "dma_device_type": 2 00:09:39.864 }, 00:09:39.864 { 00:09:39.864 "dma_device_id": "system", 00:09:39.864 "dma_device_type": 1 00:09:39.864 }, 00:09:39.864 { 00:09:39.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.864 "dma_device_type": 2 00:09:39.864 }, 00:09:39.864 { 00:09:39.864 "dma_device_id": "system", 00:09:39.864 "dma_device_type": 1 00:09:39.864 }, 00:09:39.865 { 00:09:39.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.865 "dma_device_type": 2 00:09:39.865 } 00:09:39.865 ], 00:09:39.865 "driver_specific": { 00:09:39.865 "raid": { 00:09:39.865 "uuid": "5304ff88-fbdf-4589-affe-3dff352a9b70", 00:09:39.865 "strip_size_kb": 64, 00:09:39.865 "state": "online", 00:09:39.865 "raid_level": "raid0", 00:09:39.865 "superblock": true, 00:09:39.865 "num_base_bdevs": 4, 00:09:39.865 "num_base_bdevs_discovered": 4, 00:09:39.865 "num_base_bdevs_operational": 4, 00:09:39.865 "base_bdevs_list": [ 00:09:39.865 { 00:09:39.865 "name": "NewBaseBdev", 00:09:39.865 "uuid": "4f778f0c-4e31-45f3-9d5b-c45e38bce19e", 00:09:39.865 "is_configured": true, 00:09:39.865 "data_offset": 2048, 00:09:39.865 "data_size": 63488 00:09:39.865 }, 00:09:39.865 { 00:09:39.865 "name": "BaseBdev2", 00:09:39.865 "uuid": "594a6ab2-92bf-455e-a6c1-d8c1d43f5de3", 00:09:39.865 "is_configured": true, 00:09:39.865 "data_offset": 2048, 00:09:39.865 "data_size": 63488 00:09:39.865 }, 00:09:39.865 { 00:09:39.865 
"name": "BaseBdev3", 00:09:39.865 "uuid": "a8d19418-297d-41a1-a591-71a7c34196ed", 00:09:39.865 "is_configured": true, 00:09:39.865 "data_offset": 2048, 00:09:39.865 "data_size": 63488 00:09:39.865 }, 00:09:39.865 { 00:09:39.865 "name": "BaseBdev4", 00:09:39.865 "uuid": "b2c41c03-ec05-4fae-979c-ad31829e45dd", 00:09:39.865 "is_configured": true, 00:09:39.865 "data_offset": 2048, 00:09:39.865 "data_size": 63488 00:09:39.865 } 00:09:39.865 ] 00:09:39.865 } 00:09:39.865 } 00:09:39.865 }' 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:39.865 BaseBdev2 00:09:39.865 BaseBdev3 00:09:39.865 BaseBdev4' 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.865 01:10:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.865 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.125 [2024-10-15 01:10:52.603690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.125 [2024-10-15 01:10:52.603719] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.125 [2024-10-15 01:10:52.603795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.125 [2024-10-15 01:10:52.603875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.125 [2024-10-15 01:10:52.603899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80742 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80742 ']' 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80742 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80742 00:09:40.125 killing process with pid 80742 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80742' 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80742 00:09:40.125 [2024-10-15 01:10:52.653238] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.125 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80742 00:09:40.125 [2024-10-15 01:10:52.693435] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:40.386 01:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:40.386 00:09:40.386 real 0m9.445s 00:09:40.386 user 0m16.218s 00:09:40.386 sys 0m1.901s 00:09:40.386 01:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.386 01:10:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.386 ************************************ 00:09:40.386 END TEST raid_state_function_test_sb 00:09:40.386 ************************************ 00:09:40.386 01:10:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:40.386 01:10:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:40.386 01:10:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.386 01:10:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:40.386 ************************************ 00:09:40.386 START TEST raid_superblock_test 00:09:40.386 ************************************ 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81390 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81390 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81390 ']' 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.386 01:10:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.386 [2024-10-15 01:10:53.051799] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:09:40.386 [2024-10-15 01:10:53.051935] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81390 ] 00:09:40.646 [2024-10-15 01:10:53.195215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.646 [2024-10-15 01:10:53.222745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.646 [2024-10-15 01:10:53.265283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.646 [2024-10-15 01:10:53.265322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:41.217 
01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.217 malloc1 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.217 [2024-10-15 01:10:53.903739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:41.217 [2024-10-15 01:10:53.903804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.217 [2024-10-15 01:10:53.903824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:41.217 [2024-10-15 01:10:53.903834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.217 [2024-10-15 01:10:53.905947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.217 [2024-10-15 01:10:53.905988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:41.217 pt1 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.217 malloc2 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.217 [2024-10-15 01:10:53.932297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.217 [2024-10-15 01:10:53.932348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.217 [2024-10-15 01:10:53.932362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:41.217 [2024-10-15 01:10:53.932372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.217 [2024-10-15 01:10:53.934473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.217 [2024-10-15 01:10:53.934505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.217 
pt2 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:41.217 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.478 malloc3 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.478 [2024-10-15 01:10:53.960894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:41.478 [2024-10-15 01:10:53.960961] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.478 [2024-10-15 01:10:53.960976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:41.478 [2024-10-15 01:10:53.960986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.478 [2024-10-15 01:10:53.962969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.478 [2024-10-15 01:10:53.963007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:41.478 pt3 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.478 malloc4 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.478 01:10:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.478 [2024-10-15 01:10:53.999492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:41.478 [2024-10-15 01:10:53.999542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.478 [2024-10-15 01:10:53.999560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:41.478 [2024-10-15 01:10:53.999572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.478 [2024-10-15 01:10:54.001616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.478 [2024-10-15 01:10:54.001651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:41.478 pt4 00:09:41.478 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.478 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:41.478 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:41.478 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:41.478 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.478 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.478 [2024-10-15 01:10:54.011508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:41.478 [2024-10-15 
01:10:54.013359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.478 [2024-10-15 01:10:54.013427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:41.478 [2024-10-15 01:10:54.013468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:41.478 [2024-10-15 01:10:54.013638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:41.479 [2024-10-15 01:10:54.013663] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:41.479 [2024-10-15 01:10:54.013927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:41.479 [2024-10-15 01:10:54.014077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:41.479 [2024-10-15 01:10:54.014094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:41.479 [2024-10-15 01:10:54.014250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.479 "name": "raid_bdev1", 00:09:41.479 "uuid": "166ba11a-91fb-4380-ac70-6e58a936bf12", 00:09:41.479 "strip_size_kb": 64, 00:09:41.479 "state": "online", 00:09:41.479 "raid_level": "raid0", 00:09:41.479 "superblock": true, 00:09:41.479 "num_base_bdevs": 4, 00:09:41.479 "num_base_bdevs_discovered": 4, 00:09:41.479 "num_base_bdevs_operational": 4, 00:09:41.479 "base_bdevs_list": [ 00:09:41.479 { 00:09:41.479 "name": "pt1", 00:09:41.479 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.479 "is_configured": true, 00:09:41.479 "data_offset": 2048, 00:09:41.479 "data_size": 63488 00:09:41.479 }, 00:09:41.479 { 00:09:41.479 "name": "pt2", 00:09:41.479 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.479 "is_configured": true, 00:09:41.479 "data_offset": 2048, 00:09:41.479 "data_size": 63488 00:09:41.479 }, 00:09:41.479 { 00:09:41.479 "name": "pt3", 00:09:41.479 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.479 "is_configured": true, 00:09:41.479 "data_offset": 2048, 00:09:41.479 
"data_size": 63488 00:09:41.479 }, 00:09:41.479 { 00:09:41.479 "name": "pt4", 00:09:41.479 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:41.479 "is_configured": true, 00:09:41.479 "data_offset": 2048, 00:09:41.479 "data_size": 63488 00:09:41.479 } 00:09:41.479 ] 00:09:41.479 }' 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.479 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.739 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:41.739 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:41.739 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.739 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.739 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.739 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.739 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.739 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:41.739 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.739 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.739 [2024-10-15 01:10:54.439116] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.999 "name": "raid_bdev1", 00:09:41.999 "aliases": [ 00:09:41.999 "166ba11a-91fb-4380-ac70-6e58a936bf12" 
00:09:41.999 ], 00:09:41.999 "product_name": "Raid Volume", 00:09:41.999 "block_size": 512, 00:09:41.999 "num_blocks": 253952, 00:09:41.999 "uuid": "166ba11a-91fb-4380-ac70-6e58a936bf12", 00:09:41.999 "assigned_rate_limits": { 00:09:41.999 "rw_ios_per_sec": 0, 00:09:41.999 "rw_mbytes_per_sec": 0, 00:09:41.999 "r_mbytes_per_sec": 0, 00:09:41.999 "w_mbytes_per_sec": 0 00:09:41.999 }, 00:09:41.999 "claimed": false, 00:09:41.999 "zoned": false, 00:09:41.999 "supported_io_types": { 00:09:41.999 "read": true, 00:09:41.999 "write": true, 00:09:41.999 "unmap": true, 00:09:41.999 "flush": true, 00:09:41.999 "reset": true, 00:09:41.999 "nvme_admin": false, 00:09:41.999 "nvme_io": false, 00:09:41.999 "nvme_io_md": false, 00:09:41.999 "write_zeroes": true, 00:09:41.999 "zcopy": false, 00:09:41.999 "get_zone_info": false, 00:09:41.999 "zone_management": false, 00:09:41.999 "zone_append": false, 00:09:41.999 "compare": false, 00:09:41.999 "compare_and_write": false, 00:09:41.999 "abort": false, 00:09:41.999 "seek_hole": false, 00:09:41.999 "seek_data": false, 00:09:41.999 "copy": false, 00:09:41.999 "nvme_iov_md": false 00:09:41.999 }, 00:09:41.999 "memory_domains": [ 00:09:41.999 { 00:09:41.999 "dma_device_id": "system", 00:09:41.999 "dma_device_type": 1 00:09:41.999 }, 00:09:41.999 { 00:09:41.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.999 "dma_device_type": 2 00:09:41.999 }, 00:09:41.999 { 00:09:41.999 "dma_device_id": "system", 00:09:41.999 "dma_device_type": 1 00:09:41.999 }, 00:09:41.999 { 00:09:41.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.999 "dma_device_type": 2 00:09:41.999 }, 00:09:41.999 { 00:09:41.999 "dma_device_id": "system", 00:09:41.999 "dma_device_type": 1 00:09:41.999 }, 00:09:41.999 { 00:09:41.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.999 "dma_device_type": 2 00:09:41.999 }, 00:09:41.999 { 00:09:41.999 "dma_device_id": "system", 00:09:41.999 "dma_device_type": 1 00:09:41.999 }, 00:09:41.999 { 00:09:41.999 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:41.999 "dma_device_type": 2 00:09:41.999 } 00:09:41.999 ], 00:09:41.999 "driver_specific": { 00:09:41.999 "raid": { 00:09:41.999 "uuid": "166ba11a-91fb-4380-ac70-6e58a936bf12", 00:09:41.999 "strip_size_kb": 64, 00:09:41.999 "state": "online", 00:09:41.999 "raid_level": "raid0", 00:09:41.999 "superblock": true, 00:09:41.999 "num_base_bdevs": 4, 00:09:41.999 "num_base_bdevs_discovered": 4, 00:09:41.999 "num_base_bdevs_operational": 4, 00:09:41.999 "base_bdevs_list": [ 00:09:41.999 { 00:09:41.999 "name": "pt1", 00:09:41.999 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.999 "is_configured": true, 00:09:41.999 "data_offset": 2048, 00:09:41.999 "data_size": 63488 00:09:41.999 }, 00:09:41.999 { 00:09:41.999 "name": "pt2", 00:09:41.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.999 "is_configured": true, 00:09:41.999 "data_offset": 2048, 00:09:41.999 "data_size": 63488 00:09:41.999 }, 00:09:41.999 { 00:09:41.999 "name": "pt3", 00:09:41.999 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.999 "is_configured": true, 00:09:41.999 "data_offset": 2048, 00:09:41.999 "data_size": 63488 00:09:41.999 }, 00:09:41.999 { 00:09:41.999 "name": "pt4", 00:09:41.999 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:41.999 "is_configured": true, 00:09:41.999 "data_offset": 2048, 00:09:41.999 "data_size": 63488 00:09:41.999 } 00:09:41.999 ] 00:09:41.999 } 00:09:41.999 } 00:09:41.999 }' 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:41.999 pt2 00:09:41.999 pt3 00:09:41.999 pt4' 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.999 01:10:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.999 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:42.000 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.000 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.000 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.000 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:42.260 [2024-10-15 01:10:54.734522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=166ba11a-91fb-4380-ac70-6e58a936bf12 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 166ba11a-91fb-4380-ac70-6e58a936bf12 ']' 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.260 [2024-10-15 01:10:54.774199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.260 [2024-10-15 01:10:54.774221] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.260 [2024-10-15 01:10:54.774289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.260 [2024-10-15 01:10:54.774357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.260 [2024-10-15 01:10:54.774367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:42.260 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.261 [2024-10-15 01:10:54.921949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:42.261 [2024-10-15 01:10:54.923777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:42.261 [2024-10-15 01:10:54.923826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:42.261 [2024-10-15 01:10:54.923854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:42.261 [2024-10-15 01:10:54.923897] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:42.261 [2024-10-15 01:10:54.923944] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:42.261 [2024-10-15 01:10:54.923966] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:42.261 [2024-10-15 01:10:54.923981] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:42.261 [2024-10-15 01:10:54.923994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.261 [2024-10-15 01:10:54.924003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state 
configuring 00:09:42.261 request: 00:09:42.261 { 00:09:42.261 "name": "raid_bdev1", 00:09:42.261 "raid_level": "raid0", 00:09:42.261 "base_bdevs": [ 00:09:42.261 "malloc1", 00:09:42.261 "malloc2", 00:09:42.261 "malloc3", 00:09:42.261 "malloc4" 00:09:42.261 ], 00:09:42.261 "strip_size_kb": 64, 00:09:42.261 "superblock": false, 00:09:42.261 "method": "bdev_raid_create", 00:09:42.261 "req_id": 1 00:09:42.261 } 00:09:42.261 Got JSON-RPC error response 00:09:42.261 response: 00:09:42.261 { 00:09:42.261 "code": -17, 00:09:42.261 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:42.261 } 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.261 [2024-10-15 01:10:54.977809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:42.261 [2024-10-15 01:10:54.977856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.261 [2024-10-15 01:10:54.977879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:42.261 [2024-10-15 01:10:54.977887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.261 [2024-10-15 01:10:54.980004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.261 [2024-10-15 01:10:54.980037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:42.261 [2024-10-15 01:10:54.980101] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:42.261 [2024-10-15 01:10:54.980137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:42.261 pt1 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.261 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:42.521 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.521 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.521 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.521 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.521 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:42.521 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.521 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.521 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.521 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.521 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.521 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.521 01:10:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.521 01:10:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.521 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.521 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.521 "name": "raid_bdev1", 00:09:42.521 "uuid": "166ba11a-91fb-4380-ac70-6e58a936bf12", 00:09:42.521 "strip_size_kb": 64, 00:09:42.521 "state": "configuring", 00:09:42.521 "raid_level": "raid0", 00:09:42.521 "superblock": true, 00:09:42.521 "num_base_bdevs": 4, 00:09:42.521 "num_base_bdevs_discovered": 1, 00:09:42.521 "num_base_bdevs_operational": 4, 00:09:42.521 "base_bdevs_list": [ 00:09:42.521 { 00:09:42.521 "name": "pt1", 00:09:42.521 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:42.521 "is_configured": true, 00:09:42.521 "data_offset": 2048, 00:09:42.521 "data_size": 63488 00:09:42.521 }, 00:09:42.521 { 00:09:42.521 "name": null, 00:09:42.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.521 "is_configured": false, 00:09:42.521 "data_offset": 2048, 00:09:42.521 "data_size": 63488 00:09:42.521 }, 00:09:42.521 { 00:09:42.521 "name": null, 00:09:42.521 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:42.521 "is_configured": false, 00:09:42.521 "data_offset": 2048, 00:09:42.521 "data_size": 63488 00:09:42.521 }, 00:09:42.521 { 00:09:42.521 "name": null, 00:09:42.521 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:42.521 "is_configured": false, 00:09:42.521 "data_offset": 2048, 00:09:42.521 "data_size": 63488 00:09:42.521 } 00:09:42.521 ] 00:09:42.521 }' 00:09:42.521 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.521 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.781 [2024-10-15 01:10:55.437050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:42.781 [2024-10-15 01:10:55.437114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.781 [2024-10-15 01:10:55.437133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:42.781 [2024-10-15 01:10:55.437143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.781 [2024-10-15 01:10:55.437571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.781 [2024-10-15 01:10:55.437601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:42.781 [2024-10-15 01:10:55.437683] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:42.781 [2024-10-15 01:10:55.437706] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:42.781 pt2 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.781 [2024-10-15 01:10:55.449065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.781 01:10:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.781 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.041 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.041 "name": "raid_bdev1", 00:09:43.041 "uuid": "166ba11a-91fb-4380-ac70-6e58a936bf12", 00:09:43.041 "strip_size_kb": 64, 00:09:43.041 "state": "configuring", 00:09:43.041 "raid_level": "raid0", 00:09:43.041 "superblock": true, 00:09:43.041 "num_base_bdevs": 4, 00:09:43.041 "num_base_bdevs_discovered": 1, 00:09:43.041 "num_base_bdevs_operational": 4, 00:09:43.041 "base_bdevs_list": [ 00:09:43.041 { 00:09:43.041 "name": "pt1", 00:09:43.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.041 "is_configured": true, 00:09:43.041 "data_offset": 2048, 00:09:43.041 "data_size": 63488 00:09:43.041 }, 00:09:43.041 { 00:09:43.041 "name": null, 00:09:43.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.041 "is_configured": false, 00:09:43.041 "data_offset": 0, 00:09:43.041 "data_size": 63488 00:09:43.041 }, 00:09:43.041 { 00:09:43.041 "name": null, 00:09:43.041 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.041 "is_configured": false, 00:09:43.041 "data_offset": 2048, 00:09:43.041 "data_size": 63488 00:09:43.041 }, 00:09:43.041 { 00:09:43.041 "name": null, 00:09:43.041 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:43.041 "is_configured": false, 00:09:43.041 "data_offset": 2048, 00:09:43.041 "data_size": 63488 00:09:43.041 } 00:09:43.041 ] 00:09:43.041 }' 00:09:43.041 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.041 01:10:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.301 [2024-10-15 01:10:55.948173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:43.301 [2024-10-15 01:10:55.948302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.301 [2024-10-15 01:10:55.948339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:43.301 [2024-10-15 01:10:55.948368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.301 [2024-10-15 01:10:55.948787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.301 [2024-10-15 01:10:55.948845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:43.301 [2024-10-15 01:10:55.948949] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:43.301 [2024-10-15 01:10:55.949000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:43.301 pt2 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.301 [2024-10-15 01:10:55.960110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:43.301 [2024-10-15 01:10:55.960210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.301 [2024-10-15 01:10:55.960246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:43.301 [2024-10-15 01:10:55.960302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.301 [2024-10-15 01:10:55.960644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.301 [2024-10-15 01:10:55.960699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:43.301 [2024-10-15 01:10:55.960781] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:43.301 [2024-10-15 01:10:55.960828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:43.301 pt3 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.301 [2024-10-15 01:10:55.972107] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:09:43.301 [2024-10-15 01:10:55.972208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:43.301 [2024-10-15 01:10:55.972238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:09:43.301 [2024-10-15 01:10:55.972266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:43.301 [2024-10-15 01:10:55.972553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:43.301 [2024-10-15 01:10:55.972607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:09:43.301 [2024-10-15 01:10:55.972681] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:09:43.301 [2024-10-15 01:10:55.972723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:09:43.301 [2024-10-15 01:10:55.972837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:09:43.301 [2024-10-15 01:10:55.972874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:09:43.301 [2024-10-15 01:10:55.973106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:09:43.301 [2024-10-15 01:10:55.973258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:09:43.301 [2024-10-15 01:10:55.973295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900
00:09:43.301 [2024-10-15 01:10:55.973426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:43.301 pt4
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.301 01:10:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:43.301 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.560 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:43.560 "name": "raid_bdev1",
00:09:43.560 "uuid": "166ba11a-91fb-4380-ac70-6e58a936bf12",
00:09:43.560 "strip_size_kb": 64,
00:09:43.560 "state": "online",
00:09:43.560 "raid_level": "raid0",
00:09:43.560 "superblock": true,
00:09:43.560 "num_base_bdevs": 4,
00:09:43.560 "num_base_bdevs_discovered": 4,
00:09:43.560 "num_base_bdevs_operational": 4,
00:09:43.560 "base_bdevs_list": [
00:09:43.560 {
00:09:43.560 "name": "pt1",
00:09:43.560 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:43.560 "is_configured": true,
00:09:43.560 "data_offset": 2048,
00:09:43.560 "data_size": 63488
00:09:43.561 },
00:09:43.561 {
00:09:43.561 "name": "pt2",
00:09:43.561 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:43.561 "is_configured": true,
00:09:43.561 "data_offset": 2048,
00:09:43.561 "data_size": 63488
00:09:43.561 },
00:09:43.561 {
00:09:43.561 "name": "pt3",
00:09:43.561 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:43.561 "is_configured": true,
00:09:43.561 "data_offset": 2048,
00:09:43.561 "data_size": 63488
00:09:43.561 },
00:09:43.561 {
00:09:43.561 "name": "pt4",
00:09:43.561 "uuid": "00000000-0000-0000-0000-000000000004",
00:09:43.561 "is_configured": true,
00:09:43.561 "data_offset": 2048,
00:09:43.561 "data_size": 63488
00:09:43.561 }
00:09:43.561 ]
00:09:43.561 }'
00:09:43.561 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:43.561 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.820 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:09:43.820 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:43.820 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:43.820 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:43.820 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:43.820 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:43.820 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:43.820 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:43.820 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.820 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.820 [2024-10-15 01:10:56.403748] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:43.820 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.820 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:43.820 "name": "raid_bdev1",
00:09:43.820 "aliases": [
00:09:43.820 "166ba11a-91fb-4380-ac70-6e58a936bf12"
00:09:43.820 ],
00:09:43.820 "product_name": "Raid Volume",
00:09:43.820 "block_size": 512,
00:09:43.820 "num_blocks": 253952,
00:09:43.820 "uuid": "166ba11a-91fb-4380-ac70-6e58a936bf12",
00:09:43.820 "assigned_rate_limits": {
00:09:43.820 "rw_ios_per_sec": 0,
00:09:43.820 "rw_mbytes_per_sec": 0,
00:09:43.820 "r_mbytes_per_sec": 0,
00:09:43.820 "w_mbytes_per_sec": 0
00:09:43.820 },
00:09:43.820 "claimed": false,
00:09:43.820 "zoned": false,
00:09:43.820 "supported_io_types": {
00:09:43.820 "read": true,
00:09:43.820 "write": true,
00:09:43.820 "unmap": true,
00:09:43.820 "flush": true,
00:09:43.820 "reset": true,
00:09:43.820 "nvme_admin": false,
00:09:43.820 "nvme_io": false,
00:09:43.820 "nvme_io_md": false,
00:09:43.820 "write_zeroes": true,
00:09:43.820 "zcopy": false,
00:09:43.820 "get_zone_info": false,
00:09:43.820 "zone_management": false,
00:09:43.820 "zone_append": false,
00:09:43.820 "compare": false,
00:09:43.820 "compare_and_write": false,
00:09:43.820 "abort": false,
00:09:43.820 "seek_hole": false,
00:09:43.820 "seek_data": false,
00:09:43.820 "copy": false,
00:09:43.820 "nvme_iov_md": false
00:09:43.820 },
00:09:43.820 "memory_domains": [
00:09:43.820 {
00:09:43.820 "dma_device_id": "system",
00:09:43.820 "dma_device_type": 1
00:09:43.820 },
00:09:43.820 {
00:09:43.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:43.820 "dma_device_type": 2
00:09:43.820 },
00:09:43.820 {
00:09:43.820 "dma_device_id": "system",
00:09:43.820 "dma_device_type": 1
00:09:43.820 },
00:09:43.820 {
00:09:43.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:43.820 "dma_device_type": 2
00:09:43.820 },
00:09:43.820 {
00:09:43.820 "dma_device_id": "system",
00:09:43.820 "dma_device_type": 1
00:09:43.820 },
00:09:43.820 {
00:09:43.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:43.820 "dma_device_type": 2
00:09:43.820 },
00:09:43.820 {
00:09:43.820 "dma_device_id": "system",
00:09:43.820 "dma_device_type": 1
00:09:43.820 },
00:09:43.820 {
00:09:43.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:43.820 "dma_device_type": 2
00:09:43.820 }
00:09:43.820 ],
00:09:43.820 "driver_specific": {
00:09:43.820 "raid": {
00:09:43.820 "uuid": "166ba11a-91fb-4380-ac70-6e58a936bf12",
00:09:43.820 "strip_size_kb": 64,
00:09:43.820 "state": "online",
00:09:43.820 "raid_level": "raid0",
00:09:43.820 "superblock": true,
00:09:43.820 "num_base_bdevs": 4,
00:09:43.820 "num_base_bdevs_discovered": 4,
00:09:43.820 "num_base_bdevs_operational": 4,
00:09:43.820 "base_bdevs_list": [
00:09:43.820 {
00:09:43.820 "name": "pt1",
00:09:43.820 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:43.820 "is_configured": true,
00:09:43.820 "data_offset": 2048,
00:09:43.820 "data_size": 63488
00:09:43.820 },
00:09:43.820 {
00:09:43.820 "name": "pt2",
00:09:43.820 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:43.820 "is_configured": true,
00:09:43.820 "data_offset": 2048,
00:09:43.820 "data_size": 63488
00:09:43.820 },
00:09:43.820 {
00:09:43.820 "name": "pt3",
00:09:43.820 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:43.820 "is_configured": true,
00:09:43.820 "data_offset": 2048,
00:09:43.820 "data_size": 63488
00:09:43.820 },
00:09:43.820 {
00:09:43.820 "name": "pt4",
00:09:43.820 "uuid": "00000000-0000-0000-0000-000000000004",
00:09:43.820 "is_configured": true,
00:09:43.820 "data_offset": 2048,
00:09:43.820 "data_size": 63488
00:09:43.820 }
00:09:43.820 ]
00:09:43.820 }
00:09:43.820 }
00:09:43.820 }'
00:09:43.820 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:43.820 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:43.820 pt2
00:09:43.820 pt3
00:09:43.820 pt4'
00:09:43.821 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:43.821 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:43.821 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:43.821 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:43.821 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:43.821 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.821 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.821 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.081 [2024-10-15 01:10:56.731104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 166ba11a-91fb-4380-ac70-6e58a936bf12 '!=' 166ba11a-91fb-4380-ac70-6e58a936bf12 ']'
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81390
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81390 ']'
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81390
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:44.081 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81390
00:09:44.341 killing process with pid 81390 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:44.341 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:44.341 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81390'
00:09:44.341 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81390
00:09:44.341 [2024-10-15 01:10:56.814453] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:44.341 [2024-10-15 01:10:56.814537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:44.341 [2024-10-15 01:10:56.814605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:44.341 [2024-10-15 01:10:56.814616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline
00:09:44.341 01:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81390
00:09:44.341 [2024-10-15 01:10:56.857761] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:44.601 01:10:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:09:44.601
00:09:44.601 real 0m4.100s
00:09:44.601 user 0m6.517s
00:09:44.601 sys 0m0.863s
00:09:44.601 01:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:44.601 ************************************
00:09:44.601 END TEST raid_superblock_test ************************************
00:09:44.601 01:10:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.601 01:10:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read
00:09:44.601 01:10:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:44.601 01:10:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:44.601 01:10:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:44.601 ************************************
00:09:44.601 START TEST raid_read_error_test ************************************
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sMMJbcGF6V
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81644
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81644
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 81644 ']'
00:09:44.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:44.601 01:10:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.601 [2024-10-15 01:10:57.244251] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization...
00:09:44.601 [2024-10-15 01:10:57.244367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81644 ]
00:09:44.861 [2024-10-15 01:10:57.389552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:44.861 [2024-10-15 01:10:57.416184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:44.861 [2024-10-15 01:10:57.459309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:44.861 [2024-10-15 01:10:57.459405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.430 BaseBdev1_malloc
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.430 true
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.430 [2024-10-15 01:10:58.098152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:09:45.430 [2024-10-15 01:10:58.098256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:45.430 [2024-10-15 01:10:58.098282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:09:45.430 [2024-10-15 01:10:58.098299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:45.430 [2024-10-15 01:10:58.100440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:45.430 [2024-10-15 01:10:58.100478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:09:45.430 BaseBdev1
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.430 BaseBdev2_malloc
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.430 true
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.430 [2024-10-15 01:10:58.138739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:09:45.430 [2024-10-15 01:10:58.138788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:45.430 [2024-10-15 01:10:58.138806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:09:45.430 [2024-10-15 01:10:58.138823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:45.430 [2024-10-15 01:10:58.140929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:45.430 [2024-10-15 01:10:58.141010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:09:45.430 BaseBdev2
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.430 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.689 BaseBdev3_malloc
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.689 true
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.689 [2024-10-15 01:10:58.179461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:09:45.689 [2024-10-15 01:10:58.179553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:45.689 [2024-10-15 01:10:58.179596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780
00:09:45.689 [2024-10-15 01:10:58.179611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:45.689 [2024-10-15 01:10:58.181687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:45.689 [2024-10-15 01:10:58.181723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:09:45.689 BaseBdev3
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.689 BaseBdev4_malloc
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.689 true
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.689 [2024-10-15 01:10:58.230719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:09:45.689 [2024-10-15 01:10:58.230786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:45.689 [2024-10-15 01:10:58.230809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:09:45.689 [2024-10-15 01:10:58.230817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:45.689 [2024-10-15 01:10:58.232897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:45.689 [2024-10-15 01:10:58.232944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:09:45.689 BaseBdev4
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.689 [2024-10-15 01:10:58.242751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:45.689 [2024-10-15 01:10:58.244616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:45.689 [2024-10-15 01:10:58.244692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:45.689 [2024-10-15 01:10:58.244756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:09:45.689 [2024-10-15 01:10:58.244956] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000
00:09:45.689 [2024-10-15 01:10:58.244968] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:09:45.689 [2024-10-15 01:10:58.245244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530
00:09:45.689 [2024-10-15 01:10:58.245391] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000
00:09:45.689 [2024-10-15 01:10:58.245403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000
00:09:45.689 [2024-10-15 01:10:58.245554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:45.689 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:45.690 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:45.690 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:45.690 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:45.690 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:45.690 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:45.690 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:45.690 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:45.690 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.690 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:45.690 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.690 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:45.690 "name": "raid_bdev1",
00:09:45.690 "uuid": "59ace302-1b32-4604-b569-7f8872786b78",
00:09:45.690 "strip_size_kb": 64,
00:09:45.690 "state": "online",
00:09:45.690 "raid_level": "raid0",
00:09:45.690 "superblock": true,
00:09:45.690 "num_base_bdevs": 4,
00:09:45.690 "num_base_bdevs_discovered": 4,
00:09:45.690 "num_base_bdevs_operational": 4,
00:09:45.690 "base_bdevs_list": [
00:09:45.690 {
00:09:45.690 "name": "BaseBdev1",
00:09:45.690 "uuid": "4a9e85cf-8ffb-5274-b4e8-bd2b1f4c7829",
00:09:45.690 "is_configured": true,
00:09:45.690 "data_offset": 2048,
00:09:45.690 "data_size": 63488
00:09:45.690 },
00:09:45.690 {
00:09:45.690 "name": "BaseBdev2",
00:09:45.690 "uuid": "9b207afc-aa85-5ed6-89ba-64dd38c13155",
00:09:45.690 "is_configured": true,
00:09:45.690 "data_offset": 2048,
00:09:45.690 "data_size": 63488
00:09:45.690 },
00:09:45.690 {
00:09:45.690 "name": "BaseBdev3",
00:09:45.690 "uuid": "58702b82-42ad-504f-bcf9-087608709917",
00:09:45.690 "is_configured": true,
00:09:45.690 "data_offset": 2048,
00:09:45.690 "data_size": 63488
00:09:45.690 },
00:09:45.690 {
00:09:45.690 "name": "BaseBdev4",
00:09:45.690 "uuid": "eae8260e-3f6f-5a07-b0c7-afea6d38504f",
00:09:45.690 "is_configured": true,
00:09:45.690 "data_offset": 2048,
00:09:45.690 "data_size": 63488
00:09:45.690 }
00:09:45.690 ]
00:09:45.690 }'
00:09:45.690 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:45.690 01:10:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:46.281 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:46.281 01:10:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:46.281 [2024-10-15 01:10:58.778280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.220 01:10:59
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.220 "name": "raid_bdev1", 00:09:47.220 "uuid": "59ace302-1b32-4604-b569-7f8872786b78", 00:09:47.220 "strip_size_kb": 64, 00:09:47.220 "state": "online", 00:09:47.220 "raid_level": "raid0", 00:09:47.220 "superblock": true, 00:09:47.220 "num_base_bdevs": 4, 00:09:47.220 "num_base_bdevs_discovered": 4, 00:09:47.220 "num_base_bdevs_operational": 4, 00:09:47.220 "base_bdevs_list": [ 00:09:47.220 { 00:09:47.220 "name": "BaseBdev1", 00:09:47.220 "uuid": "4a9e85cf-8ffb-5274-b4e8-bd2b1f4c7829", 00:09:47.220 "is_configured": true, 00:09:47.220 "data_offset": 2048, 00:09:47.220 "data_size": 63488 00:09:47.220 }, 00:09:47.220 { 00:09:47.220 "name": "BaseBdev2", 00:09:47.220 "uuid": "9b207afc-aa85-5ed6-89ba-64dd38c13155", 00:09:47.220 "is_configured": true, 00:09:47.220 "data_offset": 2048, 00:09:47.220 "data_size": 63488 00:09:47.220 }, 00:09:47.220 { 00:09:47.220 "name": "BaseBdev3", 00:09:47.220 "uuid": "58702b82-42ad-504f-bcf9-087608709917", 00:09:47.220 "is_configured": true, 00:09:47.220 "data_offset": 2048, 00:09:47.220 "data_size": 63488 00:09:47.220 }, 00:09:47.220 { 00:09:47.220 "name": "BaseBdev4", 00:09:47.220 "uuid": "eae8260e-3f6f-5a07-b0c7-afea6d38504f", 00:09:47.220 "is_configured": true, 00:09:47.220 "data_offset": 2048, 00:09:47.220 "data_size": 63488 00:09:47.220 } 00:09:47.220 ] 00:09:47.220 }' 00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.220 01:10:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.480 01:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.480 01:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.480 01:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.480 [2024-10-15 01:11:00.182488] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.480 [2024-10-15 01:11:00.182518] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.480 [2024-10-15 01:11:00.185057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.480 [2024-10-15 01:11:00.185198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.480 [2024-10-15 01:11:00.185265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.480 [2024-10-15 01:11:00.185275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:47.480 { 00:09:47.480 "results": [ 00:09:47.480 { 00:09:47.480 "job": "raid_bdev1", 00:09:47.480 "core_mask": "0x1", 00:09:47.480 "workload": "randrw", 00:09:47.480 "percentage": 50, 00:09:47.480 "status": "finished", 00:09:47.480 "queue_depth": 1, 00:09:47.480 "io_size": 131072, 00:09:47.480 "runtime": 1.404908, 00:09:47.480 "iops": 16489.33595651815, 00:09:47.480 "mibps": 2061.1669945647686, 00:09:47.480 "io_failed": 1, 00:09:47.480 "io_timeout": 0, 00:09:47.480 "avg_latency_us": 84.15525004226951, 00:09:47.480 "min_latency_us": 25.2646288209607, 00:09:47.480 "max_latency_us": 1459.5353711790392 00:09:47.480 } 00:09:47.480 ], 00:09:47.480 "core_count": 1 00:09:47.480 } 00:09:47.480 01:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.480 01:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81644 00:09:47.480 01:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 81644 ']' 00:09:47.480 01:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 81644 00:09:47.480 01:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:47.480 01:11:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.480 01:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81644 00:09:47.740 01:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:47.740 01:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:47.740 01:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81644' 00:09:47.740 killing process with pid 81644 00:09:47.740 01:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 81644 00:09:47.740 [2024-10-15 01:11:00.232253] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.740 01:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 81644 00:09:47.740 [2024-10-15 01:11:00.267086] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.001 01:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sMMJbcGF6V 00:09:48.001 01:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:48.001 01:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:48.001 01:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:48.001 01:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:48.001 01:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.001 01:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:48.001 01:11:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:48.001 ************************************ 00:09:48.001 END TEST raid_read_error_test 00:09:48.001 ************************************ 00:09:48.001 00:09:48.001 real 0m3.335s 
00:09:48.001 user 0m4.241s 00:09:48.001 sys 0m0.520s 00:09:48.001 01:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.001 01:11:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.001 01:11:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:09:48.001 01:11:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:48.001 01:11:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.001 01:11:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.001 ************************************ 00:09:48.001 START TEST raid_write_error_test 00:09:48.001 ************************************ 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6rT8AK7m6a 00:09:48.001 01:11:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81773 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81773 00:09:48.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 81773 ']' 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.001 01:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.001 [2024-10-15 01:11:00.654254] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:09:48.001 [2024-10-15 01:11:00.654380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81773 ] 00:09:48.261 [2024-10-15 01:11:00.783052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.261 [2024-10-15 01:11:00.809197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.261 [2024-10-15 01:11:00.851746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.261 [2024-10-15 01:11:00.851780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.831 BaseBdev1_malloc 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.831 true 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.831 [2024-10-15 01:11:01.518342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:48.831 [2024-10-15 01:11:01.518398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.831 [2024-10-15 01:11:01.518418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:48.831 [2024-10-15 01:11:01.518427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.831 [2024-10-15 01:11:01.520551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.831 [2024-10-15 01:11:01.520588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:48.831 BaseBdev1 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.831 BaseBdev2_malloc 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:48.831 01:11:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.831 true 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.831 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.091 [2024-10-15 01:11:01.558801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:49.091 [2024-10-15 01:11:01.558850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.091 [2024-10-15 01:11:01.558887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:49.091 [2024-10-15 01:11:01.558904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.091 [2024-10-15 01:11:01.561030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.091 [2024-10-15 01:11:01.561066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:49.091 BaseBdev2 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:49.091 BaseBdev3_malloc 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.091 true 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.091 [2024-10-15 01:11:01.599439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:49.091 [2024-10-15 01:11:01.599488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.091 [2024-10-15 01:11:01.599527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:49.091 [2024-10-15 01:11:01.599536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.091 [2024-10-15 01:11:01.601582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.091 [2024-10-15 01:11:01.601687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:49.091 BaseBdev3 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.091 BaseBdev4_malloc 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.091 true 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.091 [2024-10-15 01:11:01.657741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:49.091 [2024-10-15 01:11:01.657871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.091 [2024-10-15 01:11:01.657914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:49.091 [2024-10-15 01:11:01.657929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.091 [2024-10-15 01:11:01.660594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.091 [2024-10-15 01:11:01.660635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:49.091 BaseBdev4 
00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.091 [2024-10-15 01:11:01.669719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.091 [2024-10-15 01:11:01.671506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.091 [2024-10-15 01:11:01.671580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.091 [2024-10-15 01:11:01.671650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:49.091 [2024-10-15 01:11:01.671874] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:49.091 [2024-10-15 01:11:01.671892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:49.091 [2024-10-15 01:11:01.672130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:49.091 [2024-10-15 01:11:01.672295] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:49.091 [2024-10-15 01:11:01.672308] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:49.091 [2024-10-15 01:11:01.672421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.091 "name": "raid_bdev1", 00:09:49.091 "uuid": "e9719d10-4008-420f-b807-a99019c3e3f6", 00:09:49.091 "strip_size_kb": 64, 00:09:49.091 "state": "online", 00:09:49.091 "raid_level": "raid0", 00:09:49.091 "superblock": true, 00:09:49.091 "num_base_bdevs": 4, 00:09:49.091 "num_base_bdevs_discovered": 4, 00:09:49.091 
"num_base_bdevs_operational": 4, 00:09:49.091 "base_bdevs_list": [ 00:09:49.091 { 00:09:49.091 "name": "BaseBdev1", 00:09:49.091 "uuid": "b89f930e-f5c5-5078-8bbb-de25b9e64579", 00:09:49.091 "is_configured": true, 00:09:49.091 "data_offset": 2048, 00:09:49.091 "data_size": 63488 00:09:49.091 }, 00:09:49.091 { 00:09:49.091 "name": "BaseBdev2", 00:09:49.091 "uuid": "e4a9223e-75c7-5446-b357-6a3c0b4d93b0", 00:09:49.091 "is_configured": true, 00:09:49.091 "data_offset": 2048, 00:09:49.091 "data_size": 63488 00:09:49.091 }, 00:09:49.091 { 00:09:49.091 "name": "BaseBdev3", 00:09:49.091 "uuid": "e1120be3-66e8-53aa-aad8-0d36a7308eef", 00:09:49.091 "is_configured": true, 00:09:49.091 "data_offset": 2048, 00:09:49.091 "data_size": 63488 00:09:49.091 }, 00:09:49.091 { 00:09:49.091 "name": "BaseBdev4", 00:09:49.091 "uuid": "dd1a0fdb-388f-5f58-9c48-ea4047a7ea68", 00:09:49.091 "is_configured": true, 00:09:49.091 "data_offset": 2048, 00:09:49.091 "data_size": 63488 00:09:49.091 } 00:09:49.091 ] 00:09:49.091 }' 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.091 01:11:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.658 01:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:49.659 01:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:49.659 [2024-10-15 01:11:02.217193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:50.597 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:50.597 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.597 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.597 01:11:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.597 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:50.597 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:50.597 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:50.597 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.598 "name": "raid_bdev1", 00:09:50.598 "uuid": "e9719d10-4008-420f-b807-a99019c3e3f6", 00:09:50.598 "strip_size_kb": 64, 00:09:50.598 "state": "online", 00:09:50.598 "raid_level": "raid0", 00:09:50.598 "superblock": true, 00:09:50.598 "num_base_bdevs": 4, 00:09:50.598 "num_base_bdevs_discovered": 4, 00:09:50.598 "num_base_bdevs_operational": 4, 00:09:50.598 "base_bdevs_list": [ 00:09:50.598 { 00:09:50.598 "name": "BaseBdev1", 00:09:50.598 "uuid": "b89f930e-f5c5-5078-8bbb-de25b9e64579", 00:09:50.598 "is_configured": true, 00:09:50.598 "data_offset": 2048, 00:09:50.598 "data_size": 63488 00:09:50.598 }, 00:09:50.598 { 00:09:50.598 "name": "BaseBdev2", 00:09:50.598 "uuid": "e4a9223e-75c7-5446-b357-6a3c0b4d93b0", 00:09:50.598 "is_configured": true, 00:09:50.598 "data_offset": 2048, 00:09:50.598 "data_size": 63488 00:09:50.598 }, 00:09:50.598 { 00:09:50.598 "name": "BaseBdev3", 00:09:50.598 "uuid": "e1120be3-66e8-53aa-aad8-0d36a7308eef", 00:09:50.598 "is_configured": true, 00:09:50.598 "data_offset": 2048, 00:09:50.598 "data_size": 63488 00:09:50.598 }, 00:09:50.598 { 00:09:50.598 "name": "BaseBdev4", 00:09:50.598 "uuid": "dd1a0fdb-388f-5f58-9c48-ea4047a7ea68", 00:09:50.598 "is_configured": true, 00:09:50.598 "data_offset": 2048, 00:09:50.598 "data_size": 63488 00:09:50.598 } 00:09:50.598 ] 00:09:50.598 }' 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.598 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.166 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.166 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.166 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:51.166 [2024-10-15 01:11:03.616891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.166 [2024-10-15 01:11:03.616990] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.166 [2024-10-15 01:11:03.619400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.166 [2024-10-15 01:11:03.619457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.166 [2024-10-15 01:11:03.619509] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.166 [2024-10-15 01:11:03.619518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:51.166 { 00:09:51.166 "results": [ 00:09:51.166 { 00:09:51.166 "job": "raid_bdev1", 00:09:51.166 "core_mask": "0x1", 00:09:51.166 "workload": "randrw", 00:09:51.166 "percentage": 50, 00:09:51.166 "status": "finished", 00:09:51.166 "queue_depth": 1, 00:09:51.166 "io_size": 131072, 00:09:51.166 "runtime": 1.400579, 00:09:51.166 "iops": 16481.041055163616, 00:09:51.166 "mibps": 2060.130131895452, 00:09:51.166 "io_failed": 1, 00:09:51.166 "io_timeout": 0, 00:09:51.166 "avg_latency_us": 84.02435002901875, 00:09:51.166 "min_latency_us": 24.929257641921396, 00:09:51.166 "max_latency_us": 1445.2262008733624 00:09:51.166 } 00:09:51.166 ], 00:09:51.166 "core_count": 1 00:09:51.166 } 00:09:51.166 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.166 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81773 00:09:51.166 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 81773 ']' 00:09:51.166 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 81773 00:09:51.166 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:09:51.166 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.166 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81773 00:09:51.166 killing process with pid 81773 00:09:51.166 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:51.166 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:51.166 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81773' 00:09:51.166 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 81773 00:09:51.166 [2024-10-15 01:11:03.667246] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.166 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 81773 00:09:51.166 [2024-10-15 01:11:03.702345] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.426 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6rT8AK7m6a 00:09:51.426 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:51.426 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:51.426 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:51.426 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:51.426 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:51.426 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:51.426 01:11:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:51.426 00:09:51.426 real 0m3.365s 00:09:51.426 user 0m4.284s 00:09:51.426 sys 0m0.528s 00:09:51.426 
************************************ 00:09:51.426 END TEST raid_write_error_test 00:09:51.426 ************************************ 00:09:51.426 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.426 01:11:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.426 01:11:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:51.426 01:11:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:09:51.426 01:11:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:51.426 01:11:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.426 01:11:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.426 ************************************ 00:09:51.426 START TEST raid_state_function_test 00:09:51.426 ************************************ 00:09:51.426 01:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:09:51.426 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:51.426 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:51.426 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.427 01:11:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:51.427 01:11:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81905 00:09:51.427 01:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:51.427 01:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81905' 00:09:51.427 Process raid pid: 81905 00:09:51.427 01:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81905 00:09:51.427 01:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 81905 ']' 00:09:51.427 01:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.427 01:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.427 01:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.427 01:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.427 01:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.427 [2024-10-15 01:11:04.077049] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:09:51.427 [2024-10-15 01:11:04.077281] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.686 [2024-10-15 01:11:04.222525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.686 [2024-10-15 01:11:04.250158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.686 [2024-10-15 01:11:04.293593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.686 [2024-10-15 01:11:04.293705] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.256 [2024-10-15 01:11:04.967945] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:52.256 [2024-10-15 01:11:04.968083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:52.256 [2024-10-15 01:11:04.968117] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:52.256 [2024-10-15 01:11:04.968144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:52.256 [2024-10-15 01:11:04.968163] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:52.256 [2024-10-15 01:11:04.968203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:52.256 [2024-10-15 01:11:04.968223] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:52.256 [2024-10-15 01:11:04.968262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.256 01:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.515 01:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.515 01:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.515 01:11:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.515 01:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.515 01:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.515 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.515 "name": "Existed_Raid", 00:09:52.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.515 "strip_size_kb": 64, 00:09:52.515 "state": "configuring", 00:09:52.515 "raid_level": "concat", 00:09:52.515 "superblock": false, 00:09:52.515 "num_base_bdevs": 4, 00:09:52.515 "num_base_bdevs_discovered": 0, 00:09:52.515 "num_base_bdevs_operational": 4, 00:09:52.515 "base_bdevs_list": [ 00:09:52.515 { 00:09:52.515 "name": "BaseBdev1", 00:09:52.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.515 "is_configured": false, 00:09:52.515 "data_offset": 0, 00:09:52.515 "data_size": 0 00:09:52.515 }, 00:09:52.515 { 00:09:52.515 "name": "BaseBdev2", 00:09:52.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.515 "is_configured": false, 00:09:52.515 "data_offset": 0, 00:09:52.515 "data_size": 0 00:09:52.515 }, 00:09:52.515 { 00:09:52.515 "name": "BaseBdev3", 00:09:52.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.515 "is_configured": false, 00:09:52.515 "data_offset": 0, 00:09:52.515 "data_size": 0 00:09:52.515 }, 00:09:52.515 { 00:09:52.515 "name": "BaseBdev4", 00:09:52.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.515 "is_configured": false, 00:09:52.515 "data_offset": 0, 00:09:52.515 "data_size": 0 00:09:52.515 } 00:09:52.515 ] 00:09:52.515 }' 00:09:52.515 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.515 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.774 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:52.774 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.774 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.774 [2024-10-15 01:11:05.470982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:52.774 [2024-10-15 01:11:05.471027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:52.774 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.774 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:52.774 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.774 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.774 [2024-10-15 01:11:05.482980] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:52.774 [2024-10-15 01:11:05.483023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:52.774 [2024-10-15 01:11:05.483032] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:52.774 [2024-10-15 01:11:05.483041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:52.774 [2024-10-15 01:11:05.483047] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:52.774 [2024-10-15 01:11:05.483055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:52.774 [2024-10-15 01:11:05.483060] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:52.774 [2024-10-15 01:11:05.483068] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:52.774 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.774 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:52.774 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.774 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.034 [2024-10-15 01:11:05.503945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.034 BaseBdev1 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.034 [ 00:09:53.034 { 00:09:53.034 "name": "BaseBdev1", 00:09:53.034 "aliases": [ 00:09:53.034 "80222f58-a747-44e9-b4b6-1420ed523ec0" 00:09:53.034 ], 00:09:53.034 "product_name": "Malloc disk", 00:09:53.034 "block_size": 512, 00:09:53.034 "num_blocks": 65536, 00:09:53.034 "uuid": "80222f58-a747-44e9-b4b6-1420ed523ec0", 00:09:53.034 "assigned_rate_limits": { 00:09:53.034 "rw_ios_per_sec": 0, 00:09:53.034 "rw_mbytes_per_sec": 0, 00:09:53.034 "r_mbytes_per_sec": 0, 00:09:53.034 "w_mbytes_per_sec": 0 00:09:53.034 }, 00:09:53.034 "claimed": true, 00:09:53.034 "claim_type": "exclusive_write", 00:09:53.034 "zoned": false, 00:09:53.034 "supported_io_types": { 00:09:53.034 "read": true, 00:09:53.034 "write": true, 00:09:53.034 "unmap": true, 00:09:53.034 "flush": true, 00:09:53.034 "reset": true, 00:09:53.034 "nvme_admin": false, 00:09:53.034 "nvme_io": false, 00:09:53.034 "nvme_io_md": false, 00:09:53.034 "write_zeroes": true, 00:09:53.034 "zcopy": true, 00:09:53.034 "get_zone_info": false, 00:09:53.034 "zone_management": false, 00:09:53.034 "zone_append": false, 00:09:53.034 "compare": false, 00:09:53.034 "compare_and_write": false, 00:09:53.034 "abort": true, 00:09:53.034 "seek_hole": false, 00:09:53.034 "seek_data": false, 00:09:53.034 "copy": true, 00:09:53.034 "nvme_iov_md": false 00:09:53.034 }, 00:09:53.034 "memory_domains": [ 00:09:53.034 { 00:09:53.034 "dma_device_id": "system", 00:09:53.034 "dma_device_type": 1 00:09:53.034 }, 00:09:53.034 { 00:09:53.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.034 "dma_device_type": 2 00:09:53.034 } 00:09:53.034 ], 00:09:53.034 "driver_specific": {} 00:09:53.034 } 00:09:53.034 ] 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.034 "name": "Existed_Raid", 
00:09:53.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.034 "strip_size_kb": 64, 00:09:53.034 "state": "configuring", 00:09:53.034 "raid_level": "concat", 00:09:53.034 "superblock": false, 00:09:53.034 "num_base_bdevs": 4, 00:09:53.034 "num_base_bdevs_discovered": 1, 00:09:53.034 "num_base_bdevs_operational": 4, 00:09:53.034 "base_bdevs_list": [ 00:09:53.034 { 00:09:53.034 "name": "BaseBdev1", 00:09:53.034 "uuid": "80222f58-a747-44e9-b4b6-1420ed523ec0", 00:09:53.034 "is_configured": true, 00:09:53.034 "data_offset": 0, 00:09:53.034 "data_size": 65536 00:09:53.034 }, 00:09:53.034 { 00:09:53.034 "name": "BaseBdev2", 00:09:53.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.034 "is_configured": false, 00:09:53.034 "data_offset": 0, 00:09:53.034 "data_size": 0 00:09:53.034 }, 00:09:53.034 { 00:09:53.034 "name": "BaseBdev3", 00:09:53.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.034 "is_configured": false, 00:09:53.034 "data_offset": 0, 00:09:53.034 "data_size": 0 00:09:53.034 }, 00:09:53.034 { 00:09:53.034 "name": "BaseBdev4", 00:09:53.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.034 "is_configured": false, 00:09:53.034 "data_offset": 0, 00:09:53.034 "data_size": 0 00:09:53.034 } 00:09:53.034 ] 00:09:53.034 }' 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.034 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.294 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:53.294 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.294 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.294 [2024-10-15 01:11:05.983242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.294 [2024-10-15 01:11:05.983301] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:53.294 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.294 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:53.294 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.294 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.294 [2024-10-15 01:11:05.995283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.294 [2024-10-15 01:11:05.997138] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.294 [2024-10-15 01:11:05.997189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.294 [2024-10-15 01:11:05.997199] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.294 [2024-10-15 01:11:05.997207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.294 [2024-10-15 01:11:05.997213] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:53.294 [2024-10-15 01:11:05.997221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:53.294 01:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.294 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:53.294 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:53.294 01:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:53.294 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.294 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.294 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.294 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.294 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.294 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.294 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.294 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.294 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.294 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.294 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.294 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.294 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.554 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.554 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.554 "name": "Existed_Raid", 00:09:53.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.554 "strip_size_kb": 64, 00:09:53.554 "state": "configuring", 00:09:53.554 "raid_level": "concat", 00:09:53.554 "superblock": false, 00:09:53.554 "num_base_bdevs": 4, 00:09:53.554 
"num_base_bdevs_discovered": 1, 00:09:53.554 "num_base_bdevs_operational": 4, 00:09:53.554 "base_bdevs_list": [ 00:09:53.554 { 00:09:53.554 "name": "BaseBdev1", 00:09:53.554 "uuid": "80222f58-a747-44e9-b4b6-1420ed523ec0", 00:09:53.554 "is_configured": true, 00:09:53.554 "data_offset": 0, 00:09:53.554 "data_size": 65536 00:09:53.554 }, 00:09:53.554 { 00:09:53.554 "name": "BaseBdev2", 00:09:53.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.554 "is_configured": false, 00:09:53.554 "data_offset": 0, 00:09:53.554 "data_size": 0 00:09:53.554 }, 00:09:53.554 { 00:09:53.554 "name": "BaseBdev3", 00:09:53.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.554 "is_configured": false, 00:09:53.554 "data_offset": 0, 00:09:53.554 "data_size": 0 00:09:53.554 }, 00:09:53.554 { 00:09:53.554 "name": "BaseBdev4", 00:09:53.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.554 "is_configured": false, 00:09:53.554 "data_offset": 0, 00:09:53.554 "data_size": 0 00:09:53.554 } 00:09:53.554 ] 00:09:53.554 }' 00:09:53.554 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.555 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.815 [2024-10-15 01:11:06.417511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.815 BaseBdev2 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:53.815 01:11:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.815 [ 00:09:53.815 { 00:09:53.815 "name": "BaseBdev2", 00:09:53.815 "aliases": [ 00:09:53.815 "84cfdc1c-0575-4dba-8bad-2d999962d97f" 00:09:53.815 ], 00:09:53.815 "product_name": "Malloc disk", 00:09:53.815 "block_size": 512, 00:09:53.815 "num_blocks": 65536, 00:09:53.815 "uuid": "84cfdc1c-0575-4dba-8bad-2d999962d97f", 00:09:53.815 "assigned_rate_limits": { 00:09:53.815 "rw_ios_per_sec": 0, 00:09:53.815 "rw_mbytes_per_sec": 0, 00:09:53.815 "r_mbytes_per_sec": 0, 00:09:53.815 "w_mbytes_per_sec": 0 00:09:53.815 }, 00:09:53.815 "claimed": true, 00:09:53.815 "claim_type": "exclusive_write", 00:09:53.815 "zoned": false, 00:09:53.815 "supported_io_types": { 
00:09:53.815 "read": true, 00:09:53.815 "write": true, 00:09:53.815 "unmap": true, 00:09:53.815 "flush": true, 00:09:53.815 "reset": true, 00:09:53.815 "nvme_admin": false, 00:09:53.815 "nvme_io": false, 00:09:53.815 "nvme_io_md": false, 00:09:53.815 "write_zeroes": true, 00:09:53.815 "zcopy": true, 00:09:53.815 "get_zone_info": false, 00:09:53.815 "zone_management": false, 00:09:53.815 "zone_append": false, 00:09:53.815 "compare": false, 00:09:53.815 "compare_and_write": false, 00:09:53.815 "abort": true, 00:09:53.815 "seek_hole": false, 00:09:53.815 "seek_data": false, 00:09:53.815 "copy": true, 00:09:53.815 "nvme_iov_md": false 00:09:53.815 }, 00:09:53.815 "memory_domains": [ 00:09:53.815 { 00:09:53.815 "dma_device_id": "system", 00:09:53.815 "dma_device_type": 1 00:09:53.815 }, 00:09:53.815 { 00:09:53.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.815 "dma_device_type": 2 00:09:53.815 } 00:09:53.815 ], 00:09:53.815 "driver_specific": {} 00:09:53.815 } 00:09:53.815 ] 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.815 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.815 "name": "Existed_Raid", 00:09:53.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.816 "strip_size_kb": 64, 00:09:53.816 "state": "configuring", 00:09:53.816 "raid_level": "concat", 00:09:53.816 "superblock": false, 00:09:53.816 "num_base_bdevs": 4, 00:09:53.816 "num_base_bdevs_discovered": 2, 00:09:53.816 "num_base_bdevs_operational": 4, 00:09:53.816 "base_bdevs_list": [ 00:09:53.816 { 00:09:53.816 "name": "BaseBdev1", 00:09:53.816 "uuid": "80222f58-a747-44e9-b4b6-1420ed523ec0", 00:09:53.816 "is_configured": true, 00:09:53.816 "data_offset": 0, 00:09:53.816 "data_size": 65536 00:09:53.816 }, 00:09:53.816 { 00:09:53.816 "name": "BaseBdev2", 00:09:53.816 "uuid": "84cfdc1c-0575-4dba-8bad-2d999962d97f", 00:09:53.816 
"is_configured": true, 00:09:53.816 "data_offset": 0, 00:09:53.816 "data_size": 65536 00:09:53.816 }, 00:09:53.816 { 00:09:53.816 "name": "BaseBdev3", 00:09:53.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.816 "is_configured": false, 00:09:53.816 "data_offset": 0, 00:09:53.816 "data_size": 0 00:09:53.816 }, 00:09:53.816 { 00:09:53.816 "name": "BaseBdev4", 00:09:53.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.816 "is_configured": false, 00:09:53.816 "data_offset": 0, 00:09:53.816 "data_size": 0 00:09:53.816 } 00:09:53.816 ] 00:09:53.816 }' 00:09:53.816 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.816 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.401 [2024-10-15 01:11:06.919189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:54.401 BaseBdev3 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.401 [ 00:09:54.401 { 00:09:54.401 "name": "BaseBdev3", 00:09:54.401 "aliases": [ 00:09:54.401 "9c1b658f-52cd-45de-ad5c-f25e3947ef76" 00:09:54.401 ], 00:09:54.401 "product_name": "Malloc disk", 00:09:54.401 "block_size": 512, 00:09:54.401 "num_blocks": 65536, 00:09:54.401 "uuid": "9c1b658f-52cd-45de-ad5c-f25e3947ef76", 00:09:54.401 "assigned_rate_limits": { 00:09:54.401 "rw_ios_per_sec": 0, 00:09:54.401 "rw_mbytes_per_sec": 0, 00:09:54.401 "r_mbytes_per_sec": 0, 00:09:54.401 "w_mbytes_per_sec": 0 00:09:54.401 }, 00:09:54.401 "claimed": true, 00:09:54.401 "claim_type": "exclusive_write", 00:09:54.401 "zoned": false, 00:09:54.401 "supported_io_types": { 00:09:54.401 "read": true, 00:09:54.401 "write": true, 00:09:54.401 "unmap": true, 00:09:54.401 "flush": true, 00:09:54.401 "reset": true, 00:09:54.401 "nvme_admin": false, 00:09:54.401 "nvme_io": false, 00:09:54.401 "nvme_io_md": false, 00:09:54.401 "write_zeroes": true, 00:09:54.401 "zcopy": true, 00:09:54.401 "get_zone_info": false, 00:09:54.401 "zone_management": false, 00:09:54.401 "zone_append": false, 00:09:54.401 "compare": false, 00:09:54.401 "compare_and_write": false, 
00:09:54.401 "abort": true, 00:09:54.401 "seek_hole": false, 00:09:54.401 "seek_data": false, 00:09:54.401 "copy": true, 00:09:54.401 "nvme_iov_md": false 00:09:54.401 }, 00:09:54.401 "memory_domains": [ 00:09:54.401 { 00:09:54.401 "dma_device_id": "system", 00:09:54.401 "dma_device_type": 1 00:09:54.401 }, 00:09:54.401 { 00:09:54.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.401 "dma_device_type": 2 00:09:54.401 } 00:09:54.401 ], 00:09:54.401 "driver_specific": {} 00:09:54.401 } 00:09:54.401 ] 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.401 01:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.401 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.401 "name": "Existed_Raid", 00:09:54.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.401 "strip_size_kb": 64, 00:09:54.401 "state": "configuring", 00:09:54.401 "raid_level": "concat", 00:09:54.401 "superblock": false, 00:09:54.401 "num_base_bdevs": 4, 00:09:54.401 "num_base_bdevs_discovered": 3, 00:09:54.401 "num_base_bdevs_operational": 4, 00:09:54.401 "base_bdevs_list": [ 00:09:54.401 { 00:09:54.402 "name": "BaseBdev1", 00:09:54.402 "uuid": "80222f58-a747-44e9-b4b6-1420ed523ec0", 00:09:54.402 "is_configured": true, 00:09:54.402 "data_offset": 0, 00:09:54.402 "data_size": 65536 00:09:54.402 }, 00:09:54.402 { 00:09:54.402 "name": "BaseBdev2", 00:09:54.402 "uuid": "84cfdc1c-0575-4dba-8bad-2d999962d97f", 00:09:54.402 "is_configured": true, 00:09:54.402 "data_offset": 0, 00:09:54.402 "data_size": 65536 00:09:54.402 }, 00:09:54.402 { 00:09:54.402 "name": "BaseBdev3", 00:09:54.402 "uuid": "9c1b658f-52cd-45de-ad5c-f25e3947ef76", 00:09:54.402 "is_configured": true, 00:09:54.402 "data_offset": 0, 00:09:54.402 "data_size": 65536 00:09:54.402 }, 00:09:54.402 { 00:09:54.402 "name": "BaseBdev4", 00:09:54.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.402 "is_configured": false, 
00:09:54.402 "data_offset": 0, 00:09:54.402 "data_size": 0 00:09:54.402 } 00:09:54.402 ] 00:09:54.402 }' 00:09:54.402 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.402 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.661 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:54.662 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.662 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.662 [2024-10-15 01:11:07.377565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:54.662 [2024-10-15 01:11:07.377614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:54.662 [2024-10-15 01:11:07.377623] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:54.662 [2024-10-15 01:11:07.377915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:54.662 [2024-10-15 01:11:07.378054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:54.662 [2024-10-15 01:11:07.378071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:54.662 [2024-10-15 01:11:07.378275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.662 BaseBdev4 00:09:54.662 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.662 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:54.662 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:54.662 01:11:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.662 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:54.662 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.662 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.662 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.662 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.662 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.922 [ 00:09:54.922 { 00:09:54.922 "name": "BaseBdev4", 00:09:54.922 "aliases": [ 00:09:54.922 "2e4ba94a-a58c-48fe-8d69-7d2415377a77" 00:09:54.922 ], 00:09:54.922 "product_name": "Malloc disk", 00:09:54.922 "block_size": 512, 00:09:54.922 "num_blocks": 65536, 00:09:54.922 "uuid": "2e4ba94a-a58c-48fe-8d69-7d2415377a77", 00:09:54.922 "assigned_rate_limits": { 00:09:54.922 "rw_ios_per_sec": 0, 00:09:54.922 "rw_mbytes_per_sec": 0, 00:09:54.922 "r_mbytes_per_sec": 0, 00:09:54.922 "w_mbytes_per_sec": 0 00:09:54.922 }, 00:09:54.922 "claimed": true, 00:09:54.922 "claim_type": "exclusive_write", 00:09:54.922 "zoned": false, 00:09:54.922 "supported_io_types": { 00:09:54.922 "read": true, 00:09:54.922 "write": true, 00:09:54.922 "unmap": true, 00:09:54.922 "flush": true, 00:09:54.922 "reset": true, 00:09:54.922 
"nvme_admin": false, 00:09:54.922 "nvme_io": false, 00:09:54.922 "nvme_io_md": false, 00:09:54.922 "write_zeroes": true, 00:09:54.922 "zcopy": true, 00:09:54.922 "get_zone_info": false, 00:09:54.922 "zone_management": false, 00:09:54.922 "zone_append": false, 00:09:54.922 "compare": false, 00:09:54.922 "compare_and_write": false, 00:09:54.922 "abort": true, 00:09:54.922 "seek_hole": false, 00:09:54.922 "seek_data": false, 00:09:54.922 "copy": true, 00:09:54.922 "nvme_iov_md": false 00:09:54.922 }, 00:09:54.922 "memory_domains": [ 00:09:54.922 { 00:09:54.922 "dma_device_id": "system", 00:09:54.922 "dma_device_type": 1 00:09:54.922 }, 00:09:54.922 { 00:09:54.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.922 "dma_device_type": 2 00:09:54.922 } 00:09:54.922 ], 00:09:54.922 "driver_specific": {} 00:09:54.922 } 00:09:54.922 ] 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.922 
01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.922 "name": "Existed_Raid", 00:09:54.922 "uuid": "3e98f5da-65a1-4ff7-a19a-8d88f9274255", 00:09:54.922 "strip_size_kb": 64, 00:09:54.922 "state": "online", 00:09:54.922 "raid_level": "concat", 00:09:54.922 "superblock": false, 00:09:54.922 "num_base_bdevs": 4, 00:09:54.922 "num_base_bdevs_discovered": 4, 00:09:54.922 "num_base_bdevs_operational": 4, 00:09:54.922 "base_bdevs_list": [ 00:09:54.922 { 00:09:54.922 "name": "BaseBdev1", 00:09:54.922 "uuid": "80222f58-a747-44e9-b4b6-1420ed523ec0", 00:09:54.922 "is_configured": true, 00:09:54.922 "data_offset": 0, 00:09:54.922 "data_size": 65536 00:09:54.922 }, 00:09:54.922 { 00:09:54.922 "name": "BaseBdev2", 00:09:54.922 "uuid": "84cfdc1c-0575-4dba-8bad-2d999962d97f", 00:09:54.922 "is_configured": true, 00:09:54.922 "data_offset": 0, 00:09:54.922 "data_size": 65536 00:09:54.922 }, 00:09:54.922 { 00:09:54.922 "name": "BaseBdev3", 
00:09:54.922 "uuid": "9c1b658f-52cd-45de-ad5c-f25e3947ef76", 00:09:54.922 "is_configured": true, 00:09:54.922 "data_offset": 0, 00:09:54.922 "data_size": 65536 00:09:54.922 }, 00:09:54.922 { 00:09:54.922 "name": "BaseBdev4", 00:09:54.922 "uuid": "2e4ba94a-a58c-48fe-8d69-7d2415377a77", 00:09:54.922 "is_configured": true, 00:09:54.922 "data_offset": 0, 00:09:54.922 "data_size": 65536 00:09:54.922 } 00:09:54.922 ] 00:09:54.922 }' 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.922 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.182 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:55.182 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:55.182 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.182 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.182 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.182 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.182 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.182 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:55.182 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.182 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.182 [2024-10-15 01:11:07.841153] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.182 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.182 
01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.182 "name": "Existed_Raid", 00:09:55.182 "aliases": [ 00:09:55.182 "3e98f5da-65a1-4ff7-a19a-8d88f9274255" 00:09:55.182 ], 00:09:55.182 "product_name": "Raid Volume", 00:09:55.182 "block_size": 512, 00:09:55.182 "num_blocks": 262144, 00:09:55.182 "uuid": "3e98f5da-65a1-4ff7-a19a-8d88f9274255", 00:09:55.182 "assigned_rate_limits": { 00:09:55.182 "rw_ios_per_sec": 0, 00:09:55.182 "rw_mbytes_per_sec": 0, 00:09:55.182 "r_mbytes_per_sec": 0, 00:09:55.182 "w_mbytes_per_sec": 0 00:09:55.182 }, 00:09:55.182 "claimed": false, 00:09:55.182 "zoned": false, 00:09:55.182 "supported_io_types": { 00:09:55.182 "read": true, 00:09:55.182 "write": true, 00:09:55.182 "unmap": true, 00:09:55.182 "flush": true, 00:09:55.182 "reset": true, 00:09:55.182 "nvme_admin": false, 00:09:55.182 "nvme_io": false, 00:09:55.182 "nvme_io_md": false, 00:09:55.182 "write_zeroes": true, 00:09:55.182 "zcopy": false, 00:09:55.182 "get_zone_info": false, 00:09:55.182 "zone_management": false, 00:09:55.183 "zone_append": false, 00:09:55.183 "compare": false, 00:09:55.183 "compare_and_write": false, 00:09:55.183 "abort": false, 00:09:55.183 "seek_hole": false, 00:09:55.183 "seek_data": false, 00:09:55.183 "copy": false, 00:09:55.183 "nvme_iov_md": false 00:09:55.183 }, 00:09:55.183 "memory_domains": [ 00:09:55.183 { 00:09:55.183 "dma_device_id": "system", 00:09:55.183 "dma_device_type": 1 00:09:55.183 }, 00:09:55.183 { 00:09:55.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.183 "dma_device_type": 2 00:09:55.183 }, 00:09:55.183 { 00:09:55.183 "dma_device_id": "system", 00:09:55.183 "dma_device_type": 1 00:09:55.183 }, 00:09:55.183 { 00:09:55.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.183 "dma_device_type": 2 00:09:55.183 }, 00:09:55.183 { 00:09:55.183 "dma_device_id": "system", 00:09:55.183 "dma_device_type": 1 00:09:55.183 }, 00:09:55.183 { 00:09:55.183 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:55.183 "dma_device_type": 2 00:09:55.183 }, 00:09:55.183 { 00:09:55.183 "dma_device_id": "system", 00:09:55.183 "dma_device_type": 1 00:09:55.183 }, 00:09:55.183 { 00:09:55.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.183 "dma_device_type": 2 00:09:55.183 } 00:09:55.183 ], 00:09:55.183 "driver_specific": { 00:09:55.183 "raid": { 00:09:55.183 "uuid": "3e98f5da-65a1-4ff7-a19a-8d88f9274255", 00:09:55.183 "strip_size_kb": 64, 00:09:55.183 "state": "online", 00:09:55.183 "raid_level": "concat", 00:09:55.183 "superblock": false, 00:09:55.183 "num_base_bdevs": 4, 00:09:55.183 "num_base_bdevs_discovered": 4, 00:09:55.183 "num_base_bdevs_operational": 4, 00:09:55.183 "base_bdevs_list": [ 00:09:55.183 { 00:09:55.183 "name": "BaseBdev1", 00:09:55.183 "uuid": "80222f58-a747-44e9-b4b6-1420ed523ec0", 00:09:55.183 "is_configured": true, 00:09:55.183 "data_offset": 0, 00:09:55.183 "data_size": 65536 00:09:55.183 }, 00:09:55.183 { 00:09:55.183 "name": "BaseBdev2", 00:09:55.183 "uuid": "84cfdc1c-0575-4dba-8bad-2d999962d97f", 00:09:55.183 "is_configured": true, 00:09:55.183 "data_offset": 0, 00:09:55.183 "data_size": 65536 00:09:55.183 }, 00:09:55.183 { 00:09:55.183 "name": "BaseBdev3", 00:09:55.183 "uuid": "9c1b658f-52cd-45de-ad5c-f25e3947ef76", 00:09:55.183 "is_configured": true, 00:09:55.183 "data_offset": 0, 00:09:55.183 "data_size": 65536 00:09:55.183 }, 00:09:55.183 { 00:09:55.183 "name": "BaseBdev4", 00:09:55.183 "uuid": "2e4ba94a-a58c-48fe-8d69-7d2415377a77", 00:09:55.183 "is_configured": true, 00:09:55.183 "data_offset": 0, 00:09:55.183 "data_size": 65536 00:09:55.183 } 00:09:55.183 ] 00:09:55.183 } 00:09:55.183 } 00:09:55.183 }' 00:09:55.183 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.455 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:55.455 BaseBdev2 
00:09:55.455 BaseBdev3 00:09:55.455 BaseBdev4' 00:09:55.455 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.455 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.455 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.455 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:55.455 01:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.455 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.455 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.456 01:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.456 01:11:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.456 01:11:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.456 [2024-10-15 01:11:08.152352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:55.456 [2024-10-15 01:11:08.152385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.456 [2024-10-15 01:11:08.152444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.456 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.724 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.724 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.724 "name": "Existed_Raid", 00:09:55.724 "uuid": "3e98f5da-65a1-4ff7-a19a-8d88f9274255", 00:09:55.724 "strip_size_kb": 64, 00:09:55.724 "state": "offline", 00:09:55.724 "raid_level": "concat", 00:09:55.724 "superblock": false, 00:09:55.724 "num_base_bdevs": 4, 00:09:55.724 "num_base_bdevs_discovered": 3, 00:09:55.724 "num_base_bdevs_operational": 3, 00:09:55.724 "base_bdevs_list": [ 00:09:55.724 { 00:09:55.724 "name": null, 00:09:55.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.724 "is_configured": false, 00:09:55.724 "data_offset": 0, 00:09:55.724 "data_size": 65536 00:09:55.724 }, 00:09:55.724 { 00:09:55.724 "name": "BaseBdev2", 00:09:55.724 "uuid": "84cfdc1c-0575-4dba-8bad-2d999962d97f", 00:09:55.724 "is_configured": 
true, 00:09:55.724 "data_offset": 0, 00:09:55.724 "data_size": 65536 00:09:55.724 }, 00:09:55.724 { 00:09:55.724 "name": "BaseBdev3", 00:09:55.724 "uuid": "9c1b658f-52cd-45de-ad5c-f25e3947ef76", 00:09:55.724 "is_configured": true, 00:09:55.724 "data_offset": 0, 00:09:55.724 "data_size": 65536 00:09:55.724 }, 00:09:55.724 { 00:09:55.724 "name": "BaseBdev4", 00:09:55.724 "uuid": "2e4ba94a-a58c-48fe-8d69-7d2415377a77", 00:09:55.724 "is_configured": true, 00:09:55.724 "data_offset": 0, 00:09:55.724 "data_size": 65536 00:09:55.724 } 00:09:55.724 ] 00:09:55.724 }' 00:09:55.724 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.724 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.983 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:55.983 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:55.983 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.983 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.983 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.983 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:55.983 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.983 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:55.983 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:55.983 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:55.983 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:55.983 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.983 [2024-10-15 01:11:08.606860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:55.983 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.984 [2024-10-15 01:11:08.677912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:55.984 01:11:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.984 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.244 [2024-10-15 01:11:08.736985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:56.244 [2024-10-15 01:11:08.737040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:56.244 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.245 BaseBdev2 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.245 [ 00:09:56.245 { 00:09:56.245 "name": "BaseBdev2", 00:09:56.245 "aliases": [ 00:09:56.245 "4254e6b4-0854-4c6a-98e5-3bcd6da8cea5" 00:09:56.245 ], 00:09:56.245 "product_name": "Malloc disk", 00:09:56.245 "block_size": 512, 00:09:56.245 "num_blocks": 65536, 00:09:56.245 "uuid": "4254e6b4-0854-4c6a-98e5-3bcd6da8cea5", 00:09:56.245 "assigned_rate_limits": { 00:09:56.245 "rw_ios_per_sec": 0, 00:09:56.245 "rw_mbytes_per_sec": 0, 00:09:56.245 "r_mbytes_per_sec": 0, 00:09:56.245 "w_mbytes_per_sec": 0 00:09:56.245 }, 00:09:56.245 "claimed": false, 00:09:56.245 "zoned": false, 00:09:56.245 "supported_io_types": { 00:09:56.245 "read": true, 00:09:56.245 "write": true, 00:09:56.245 "unmap": true, 00:09:56.245 "flush": true, 00:09:56.245 "reset": true, 00:09:56.245 "nvme_admin": false, 00:09:56.245 "nvme_io": false, 00:09:56.245 "nvme_io_md": false, 00:09:56.245 "write_zeroes": true, 00:09:56.245 "zcopy": true, 00:09:56.245 "get_zone_info": false, 00:09:56.245 "zone_management": false, 00:09:56.245 "zone_append": false, 00:09:56.245 "compare": false, 00:09:56.245 "compare_and_write": false, 00:09:56.245 "abort": true, 00:09:56.245 "seek_hole": false, 00:09:56.245 
"seek_data": false, 00:09:56.245 "copy": true, 00:09:56.245 "nvme_iov_md": false 00:09:56.245 }, 00:09:56.245 "memory_domains": [ 00:09:56.245 { 00:09:56.245 "dma_device_id": "system", 00:09:56.245 "dma_device_type": 1 00:09:56.245 }, 00:09:56.245 { 00:09:56.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.245 "dma_device_type": 2 00:09:56.245 } 00:09:56.245 ], 00:09:56.245 "driver_specific": {} 00:09:56.245 } 00:09:56.245 ] 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.245 BaseBdev3 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.245 [ 00:09:56.245 { 00:09:56.245 "name": "BaseBdev3", 00:09:56.245 "aliases": [ 00:09:56.245 "ce9d7f6e-f29d-46bc-8fcf-92be7647659e" 00:09:56.245 ], 00:09:56.245 "product_name": "Malloc disk", 00:09:56.245 "block_size": 512, 00:09:56.245 "num_blocks": 65536, 00:09:56.245 "uuid": "ce9d7f6e-f29d-46bc-8fcf-92be7647659e", 00:09:56.245 "assigned_rate_limits": { 00:09:56.245 "rw_ios_per_sec": 0, 00:09:56.245 "rw_mbytes_per_sec": 0, 00:09:56.245 "r_mbytes_per_sec": 0, 00:09:56.245 "w_mbytes_per_sec": 0 00:09:56.245 }, 00:09:56.245 "claimed": false, 00:09:56.245 "zoned": false, 00:09:56.245 "supported_io_types": { 00:09:56.245 "read": true, 00:09:56.245 "write": true, 00:09:56.245 "unmap": true, 00:09:56.245 "flush": true, 00:09:56.245 "reset": true, 00:09:56.245 "nvme_admin": false, 00:09:56.245 "nvme_io": false, 00:09:56.245 "nvme_io_md": false, 00:09:56.245 "write_zeroes": true, 00:09:56.245 "zcopy": true, 00:09:56.245 "get_zone_info": false, 00:09:56.245 "zone_management": false, 00:09:56.245 "zone_append": false, 00:09:56.245 "compare": false, 00:09:56.245 "compare_and_write": false, 00:09:56.245 "abort": true, 00:09:56.245 "seek_hole": false, 00:09:56.245 "seek_data": false, 
00:09:56.245 "copy": true, 00:09:56.245 "nvme_iov_md": false 00:09:56.245 }, 00:09:56.245 "memory_domains": [ 00:09:56.245 { 00:09:56.245 "dma_device_id": "system", 00:09:56.245 "dma_device_type": 1 00:09:56.245 }, 00:09:56.245 { 00:09:56.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.245 "dma_device_type": 2 00:09:56.245 } 00:09:56.245 ], 00:09:56.245 "driver_specific": {} 00:09:56.245 } 00:09:56.245 ] 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.245 BaseBdev4 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.245 
01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.245 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.245 [ 00:09:56.245 { 00:09:56.245 "name": "BaseBdev4", 00:09:56.245 "aliases": [ 00:09:56.245 "8ba0662f-2e2d-4753-a18a-bcef687825f5" 00:09:56.245 ], 00:09:56.245 "product_name": "Malloc disk", 00:09:56.245 "block_size": 512, 00:09:56.245 "num_blocks": 65536, 00:09:56.245 "uuid": "8ba0662f-2e2d-4753-a18a-bcef687825f5", 00:09:56.245 "assigned_rate_limits": { 00:09:56.245 "rw_ios_per_sec": 0, 00:09:56.245 "rw_mbytes_per_sec": 0, 00:09:56.245 "r_mbytes_per_sec": 0, 00:09:56.245 "w_mbytes_per_sec": 0 00:09:56.245 }, 00:09:56.245 "claimed": false, 00:09:56.245 "zoned": false, 00:09:56.245 "supported_io_types": { 00:09:56.245 "read": true, 00:09:56.245 "write": true, 00:09:56.245 "unmap": true, 00:09:56.245 "flush": true, 00:09:56.245 "reset": true, 00:09:56.245 "nvme_admin": false, 00:09:56.245 "nvme_io": false, 00:09:56.245 "nvme_io_md": false, 00:09:56.245 "write_zeroes": true, 00:09:56.245 "zcopy": true, 00:09:56.245 "get_zone_info": false, 00:09:56.245 "zone_management": false, 00:09:56.245 "zone_append": false, 00:09:56.246 "compare": false, 00:09:56.246 "compare_and_write": false, 00:09:56.246 "abort": true, 00:09:56.246 "seek_hole": false, 00:09:56.246 "seek_data": false, 00:09:56.246 
"copy": true, 00:09:56.246 "nvme_iov_md": false 00:09:56.246 }, 00:09:56.246 "memory_domains": [ 00:09:56.246 { 00:09:56.246 "dma_device_id": "system", 00:09:56.246 "dma_device_type": 1 00:09:56.246 }, 00:09:56.246 { 00:09:56.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.246 "dma_device_type": 2 00:09:56.246 } 00:09:56.246 ], 00:09:56.246 "driver_specific": {} 00:09:56.246 } 00:09:56.246 ] 00:09:56.246 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.246 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:56.246 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:56.246 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:56.246 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:56.246 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.246 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.246 [2024-10-15 01:11:08.965726] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.246 [2024-10-15 01:11:08.965769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.246 [2024-10-15 01:11:08.965797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.246 [2024-10-15 01:11:08.967529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.246 [2024-10-15 01:11:08.967578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:56.506 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.506 01:11:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:56.506 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.506 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.506 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.506 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.506 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.506 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.506 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.506 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.506 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.506 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.506 01:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.506 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.506 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.506 01:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.506 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.506 "name": "Existed_Raid", 00:09:56.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.506 "strip_size_kb": 64, 00:09:56.506 "state": "configuring", 00:09:56.506 
"raid_level": "concat", 00:09:56.506 "superblock": false, 00:09:56.506 "num_base_bdevs": 4, 00:09:56.506 "num_base_bdevs_discovered": 3, 00:09:56.506 "num_base_bdevs_operational": 4, 00:09:56.506 "base_bdevs_list": [ 00:09:56.506 { 00:09:56.506 "name": "BaseBdev1", 00:09:56.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.506 "is_configured": false, 00:09:56.506 "data_offset": 0, 00:09:56.506 "data_size": 0 00:09:56.506 }, 00:09:56.506 { 00:09:56.506 "name": "BaseBdev2", 00:09:56.506 "uuid": "4254e6b4-0854-4c6a-98e5-3bcd6da8cea5", 00:09:56.506 "is_configured": true, 00:09:56.506 "data_offset": 0, 00:09:56.506 "data_size": 65536 00:09:56.506 }, 00:09:56.506 { 00:09:56.506 "name": "BaseBdev3", 00:09:56.506 "uuid": "ce9d7f6e-f29d-46bc-8fcf-92be7647659e", 00:09:56.506 "is_configured": true, 00:09:56.506 "data_offset": 0, 00:09:56.506 "data_size": 65536 00:09:56.506 }, 00:09:56.506 { 00:09:56.506 "name": "BaseBdev4", 00:09:56.506 "uuid": "8ba0662f-2e2d-4753-a18a-bcef687825f5", 00:09:56.506 "is_configured": true, 00:09:56.506 "data_offset": 0, 00:09:56.506 "data_size": 65536 00:09:56.506 } 00:09:56.506 ] 00:09:56.506 }' 00:09:56.506 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.506 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.766 [2024-10-15 01:11:09.409016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.766 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.766 "name": "Existed_Raid", 00:09:56.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.766 "strip_size_kb": 64, 00:09:56.766 "state": "configuring", 00:09:56.766 "raid_level": "concat", 00:09:56.766 "superblock": false, 
00:09:56.766 "num_base_bdevs": 4, 00:09:56.766 "num_base_bdevs_discovered": 2, 00:09:56.766 "num_base_bdevs_operational": 4, 00:09:56.766 "base_bdevs_list": [ 00:09:56.766 { 00:09:56.766 "name": "BaseBdev1", 00:09:56.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.766 "is_configured": false, 00:09:56.766 "data_offset": 0, 00:09:56.766 "data_size": 0 00:09:56.766 }, 00:09:56.766 { 00:09:56.766 "name": null, 00:09:56.766 "uuid": "4254e6b4-0854-4c6a-98e5-3bcd6da8cea5", 00:09:56.766 "is_configured": false, 00:09:56.766 "data_offset": 0, 00:09:56.766 "data_size": 65536 00:09:56.766 }, 00:09:56.766 { 00:09:56.766 "name": "BaseBdev3", 00:09:56.766 "uuid": "ce9d7f6e-f29d-46bc-8fcf-92be7647659e", 00:09:56.766 "is_configured": true, 00:09:56.766 "data_offset": 0, 00:09:56.766 "data_size": 65536 00:09:56.766 }, 00:09:56.766 { 00:09:56.766 "name": "BaseBdev4", 00:09:56.766 "uuid": "8ba0662f-2e2d-4753-a18a-bcef687825f5", 00:09:56.766 "is_configured": true, 00:09:56.766 "data_offset": 0, 00:09:56.766 "data_size": 65536 00:09:56.766 } 00:09:56.766 ] 00:09:56.767 }' 00:09:56.767 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.767 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:57.337 01:11:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.337 [2024-10-15 01:11:09.879217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.337 BaseBdev1 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.337 [ 00:09:57.337 { 00:09:57.337 "name": "BaseBdev1", 00:09:57.337 "aliases": [ 00:09:57.337 "3df8de21-e3b5-4a0b-9532-7fb9fb8010e4" 00:09:57.337 ], 00:09:57.337 "product_name": "Malloc disk", 00:09:57.337 "block_size": 512, 00:09:57.337 "num_blocks": 65536, 00:09:57.337 "uuid": "3df8de21-e3b5-4a0b-9532-7fb9fb8010e4", 00:09:57.337 "assigned_rate_limits": { 00:09:57.337 "rw_ios_per_sec": 0, 00:09:57.337 "rw_mbytes_per_sec": 0, 00:09:57.337 "r_mbytes_per_sec": 0, 00:09:57.337 "w_mbytes_per_sec": 0 00:09:57.337 }, 00:09:57.337 "claimed": true, 00:09:57.337 "claim_type": "exclusive_write", 00:09:57.337 "zoned": false, 00:09:57.337 "supported_io_types": { 00:09:57.337 "read": true, 00:09:57.337 "write": true, 00:09:57.337 "unmap": true, 00:09:57.337 "flush": true, 00:09:57.337 "reset": true, 00:09:57.337 "nvme_admin": false, 00:09:57.337 "nvme_io": false, 00:09:57.337 "nvme_io_md": false, 00:09:57.337 "write_zeroes": true, 00:09:57.337 "zcopy": true, 00:09:57.337 "get_zone_info": false, 00:09:57.337 "zone_management": false, 00:09:57.337 "zone_append": false, 00:09:57.337 "compare": false, 00:09:57.337 "compare_and_write": false, 00:09:57.337 "abort": true, 00:09:57.337 "seek_hole": false, 00:09:57.337 "seek_data": false, 00:09:57.337 "copy": true, 00:09:57.337 "nvme_iov_md": false 00:09:57.337 }, 00:09:57.337 "memory_domains": [ 00:09:57.337 { 00:09:57.337 "dma_device_id": "system", 00:09:57.337 "dma_device_type": 1 00:09:57.337 }, 00:09:57.337 { 00:09:57.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.337 "dma_device_type": 2 00:09:57.337 } 00:09:57.337 ], 00:09:57.337 "driver_specific": {} 00:09:57.337 } 00:09:57.337 ] 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.337 "name": "Existed_Raid", 00:09:57.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.337 "strip_size_kb": 64, 00:09:57.337 "state": "configuring", 00:09:57.337 "raid_level": "concat", 00:09:57.337 "superblock": false, 
00:09:57.337 "num_base_bdevs": 4, 00:09:57.337 "num_base_bdevs_discovered": 3, 00:09:57.337 "num_base_bdevs_operational": 4, 00:09:57.337 "base_bdevs_list": [ 00:09:57.337 { 00:09:57.337 "name": "BaseBdev1", 00:09:57.337 "uuid": "3df8de21-e3b5-4a0b-9532-7fb9fb8010e4", 00:09:57.337 "is_configured": true, 00:09:57.337 "data_offset": 0, 00:09:57.337 "data_size": 65536 00:09:57.337 }, 00:09:57.337 { 00:09:57.337 "name": null, 00:09:57.337 "uuid": "4254e6b4-0854-4c6a-98e5-3bcd6da8cea5", 00:09:57.337 "is_configured": false, 00:09:57.337 "data_offset": 0, 00:09:57.337 "data_size": 65536 00:09:57.337 }, 00:09:57.337 { 00:09:57.337 "name": "BaseBdev3", 00:09:57.337 "uuid": "ce9d7f6e-f29d-46bc-8fcf-92be7647659e", 00:09:57.337 "is_configured": true, 00:09:57.337 "data_offset": 0, 00:09:57.337 "data_size": 65536 00:09:57.337 }, 00:09:57.337 { 00:09:57.337 "name": "BaseBdev4", 00:09:57.337 "uuid": "8ba0662f-2e2d-4753-a18a-bcef687825f5", 00:09:57.337 "is_configured": true, 00:09:57.337 "data_offset": 0, 00:09:57.337 "data_size": 65536 00:09:57.337 } 00:09:57.337 ] 00:09:57.337 }' 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.337 01:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:57.907 01:11:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.907 [2024-10-15 01:11:10.426311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.907 01:11:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.907 "name": "Existed_Raid", 00:09:57.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.907 "strip_size_kb": 64, 00:09:57.907 "state": "configuring", 00:09:57.907 "raid_level": "concat", 00:09:57.907 "superblock": false, 00:09:57.907 "num_base_bdevs": 4, 00:09:57.907 "num_base_bdevs_discovered": 2, 00:09:57.907 "num_base_bdevs_operational": 4, 00:09:57.907 "base_bdevs_list": [ 00:09:57.907 { 00:09:57.907 "name": "BaseBdev1", 00:09:57.907 "uuid": "3df8de21-e3b5-4a0b-9532-7fb9fb8010e4", 00:09:57.907 "is_configured": true, 00:09:57.907 "data_offset": 0, 00:09:57.907 "data_size": 65536 00:09:57.907 }, 00:09:57.907 { 00:09:57.907 "name": null, 00:09:57.907 "uuid": "4254e6b4-0854-4c6a-98e5-3bcd6da8cea5", 00:09:57.907 "is_configured": false, 00:09:57.907 "data_offset": 0, 00:09:57.907 "data_size": 65536 00:09:57.907 }, 00:09:57.907 { 00:09:57.907 "name": null, 00:09:57.907 "uuid": "ce9d7f6e-f29d-46bc-8fcf-92be7647659e", 00:09:57.907 "is_configured": false, 00:09:57.907 "data_offset": 0, 00:09:57.907 "data_size": 65536 00:09:57.907 }, 00:09:57.907 { 00:09:57.907 "name": "BaseBdev4", 00:09:57.907 "uuid": "8ba0662f-2e2d-4753-a18a-bcef687825f5", 00:09:57.907 "is_configured": true, 00:09:57.907 "data_offset": 0, 00:09:57.907 "data_size": 65536 00:09:57.907 } 00:09:57.907 ] 00:09:57.907 }' 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.907 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.167 01:11:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:58.167 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.167 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.167 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.167 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.427 [2024-10-15 01:11:10.901495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.427 "name": "Existed_Raid", 00:09:58.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.427 "strip_size_kb": 64, 00:09:58.427 "state": "configuring", 00:09:58.427 "raid_level": "concat", 00:09:58.427 "superblock": false, 00:09:58.427 "num_base_bdevs": 4, 00:09:58.427 "num_base_bdevs_discovered": 3, 00:09:58.427 "num_base_bdevs_operational": 4, 00:09:58.427 "base_bdevs_list": [ 00:09:58.427 { 00:09:58.427 "name": "BaseBdev1", 00:09:58.427 "uuid": "3df8de21-e3b5-4a0b-9532-7fb9fb8010e4", 00:09:58.427 "is_configured": true, 00:09:58.427 "data_offset": 0, 00:09:58.427 "data_size": 65536 00:09:58.427 }, 00:09:58.427 { 00:09:58.427 "name": null, 00:09:58.427 "uuid": "4254e6b4-0854-4c6a-98e5-3bcd6da8cea5", 00:09:58.427 "is_configured": false, 00:09:58.427 "data_offset": 0, 00:09:58.427 "data_size": 65536 00:09:58.427 }, 00:09:58.427 { 00:09:58.427 "name": "BaseBdev3", 00:09:58.427 "uuid": 
"ce9d7f6e-f29d-46bc-8fcf-92be7647659e", 00:09:58.427 "is_configured": true, 00:09:58.427 "data_offset": 0, 00:09:58.427 "data_size": 65536 00:09:58.427 }, 00:09:58.427 { 00:09:58.427 "name": "BaseBdev4", 00:09:58.427 "uuid": "8ba0662f-2e2d-4753-a18a-bcef687825f5", 00:09:58.427 "is_configured": true, 00:09:58.427 "data_offset": 0, 00:09:58.427 "data_size": 65536 00:09:58.427 } 00:09:58.427 ] 00:09:58.427 }' 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.427 01:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.688 [2024-10-15 01:11:11.356763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.688 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.948 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.948 "name": "Existed_Raid", 00:09:58.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.948 "strip_size_kb": 64, 00:09:58.948 "state": "configuring", 00:09:58.948 "raid_level": "concat", 00:09:58.948 "superblock": false, 00:09:58.948 "num_base_bdevs": 4, 00:09:58.948 
"num_base_bdevs_discovered": 2, 00:09:58.948 "num_base_bdevs_operational": 4, 00:09:58.948 "base_bdevs_list": [ 00:09:58.948 { 00:09:58.948 "name": null, 00:09:58.948 "uuid": "3df8de21-e3b5-4a0b-9532-7fb9fb8010e4", 00:09:58.948 "is_configured": false, 00:09:58.948 "data_offset": 0, 00:09:58.948 "data_size": 65536 00:09:58.948 }, 00:09:58.948 { 00:09:58.948 "name": null, 00:09:58.948 "uuid": "4254e6b4-0854-4c6a-98e5-3bcd6da8cea5", 00:09:58.948 "is_configured": false, 00:09:58.948 "data_offset": 0, 00:09:58.948 "data_size": 65536 00:09:58.948 }, 00:09:58.948 { 00:09:58.948 "name": "BaseBdev3", 00:09:58.948 "uuid": "ce9d7f6e-f29d-46bc-8fcf-92be7647659e", 00:09:58.948 "is_configured": true, 00:09:58.948 "data_offset": 0, 00:09:58.948 "data_size": 65536 00:09:58.948 }, 00:09:58.948 { 00:09:58.948 "name": "BaseBdev4", 00:09:58.948 "uuid": "8ba0662f-2e2d-4753-a18a-bcef687825f5", 00:09:58.948 "is_configured": true, 00:09:58.948 "data_offset": 0, 00:09:58.948 "data_size": 65536 00:09:58.948 } 00:09:58.948 ] 00:09:58.948 }' 00:09:58.948 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.948 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.207 [2024-10-15 01:11:11.862586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.207 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.208 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.208 "name": "Existed_Raid", 00:09:59.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.208 "strip_size_kb": 64, 00:09:59.208 "state": "configuring", 00:09:59.208 "raid_level": "concat", 00:09:59.208 "superblock": false, 00:09:59.208 "num_base_bdevs": 4, 00:09:59.208 "num_base_bdevs_discovered": 3, 00:09:59.208 "num_base_bdevs_operational": 4, 00:09:59.208 "base_bdevs_list": [ 00:09:59.208 { 00:09:59.208 "name": null, 00:09:59.208 "uuid": "3df8de21-e3b5-4a0b-9532-7fb9fb8010e4", 00:09:59.208 "is_configured": false, 00:09:59.208 "data_offset": 0, 00:09:59.208 "data_size": 65536 00:09:59.208 }, 00:09:59.208 { 00:09:59.208 "name": "BaseBdev2", 00:09:59.208 "uuid": "4254e6b4-0854-4c6a-98e5-3bcd6da8cea5", 00:09:59.208 "is_configured": true, 00:09:59.208 "data_offset": 0, 00:09:59.208 "data_size": 65536 00:09:59.208 }, 00:09:59.208 { 00:09:59.208 "name": "BaseBdev3", 00:09:59.208 "uuid": "ce9d7f6e-f29d-46bc-8fcf-92be7647659e", 00:09:59.208 "is_configured": true, 00:09:59.208 "data_offset": 0, 00:09:59.208 "data_size": 65536 00:09:59.208 }, 00:09:59.208 { 00:09:59.208 "name": "BaseBdev4", 00:09:59.208 "uuid": "8ba0662f-2e2d-4753-a18a-bcef687825f5", 00:09:59.208 "is_configured": true, 00:09:59.208 "data_offset": 0, 00:09:59.208 "data_size": 65536 00:09:59.208 } 00:09:59.208 ] 00:09:59.208 }' 00:09:59.208 01:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.208 01:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3df8de21-e3b5-4a0b-9532-7fb9fb8010e4 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.777 [2024-10-15 01:11:12.396721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:59.777 [2024-10-15 01:11:12.396767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:59.777 [2024-10-15 01:11:12.396774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:59.777 [2024-10-15 01:11:12.397061] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:09:59.777 [2024-10-15 01:11:12.397195] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:59.777 [2024-10-15 01:11:12.397213] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:59.777 [2024-10-15 01:11:12.397386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.777 NewBaseBdev 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.777 01:11:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.777 [ 00:09:59.777 { 00:09:59.777 "name": "NewBaseBdev", 00:09:59.777 "aliases": [ 00:09:59.777 "3df8de21-e3b5-4a0b-9532-7fb9fb8010e4" 00:09:59.777 ], 00:09:59.777 "product_name": "Malloc disk", 00:09:59.777 "block_size": 512, 00:09:59.777 "num_blocks": 65536, 00:09:59.777 "uuid": "3df8de21-e3b5-4a0b-9532-7fb9fb8010e4", 00:09:59.777 "assigned_rate_limits": { 00:09:59.777 "rw_ios_per_sec": 0, 00:09:59.777 "rw_mbytes_per_sec": 0, 00:09:59.777 "r_mbytes_per_sec": 0, 00:09:59.777 "w_mbytes_per_sec": 0 00:09:59.777 }, 00:09:59.777 "claimed": true, 00:09:59.777 "claim_type": "exclusive_write", 00:09:59.777 "zoned": false, 00:09:59.777 "supported_io_types": { 00:09:59.777 "read": true, 00:09:59.777 "write": true, 00:09:59.777 "unmap": true, 00:09:59.777 "flush": true, 00:09:59.777 "reset": true, 00:09:59.777 "nvme_admin": false, 00:09:59.777 "nvme_io": false, 00:09:59.777 "nvme_io_md": false, 00:09:59.777 "write_zeroes": true, 00:09:59.777 "zcopy": true, 00:09:59.777 "get_zone_info": false, 00:09:59.777 "zone_management": false, 00:09:59.777 "zone_append": false, 00:09:59.777 "compare": false, 00:09:59.777 "compare_and_write": false, 00:09:59.777 "abort": true, 00:09:59.777 "seek_hole": false, 00:09:59.777 "seek_data": false, 00:09:59.777 "copy": true, 00:09:59.777 "nvme_iov_md": false 00:09:59.777 }, 00:09:59.777 "memory_domains": [ 00:09:59.777 { 00:09:59.777 "dma_device_id": "system", 00:09:59.777 "dma_device_type": 1 00:09:59.777 }, 00:09:59.777 { 00:09:59.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.777 "dma_device_type": 2 00:09:59.777 } 00:09:59.777 ], 00:09:59.777 "driver_specific": {} 00:09:59.777 } 00:09:59.777 ] 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:59.777 01:11:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.777 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.778 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.778 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.778 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.778 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.778 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.778 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.778 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.778 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.778 "name": "Existed_Raid", 00:09:59.778 "uuid": "55e7f204-926e-4dd9-bc4a-63f4f2d0a709", 00:09:59.778 "strip_size_kb": 64, 00:09:59.778 "state": "online", 00:09:59.778 "raid_level": 
"concat", 00:09:59.778 "superblock": false, 00:09:59.778 "num_base_bdevs": 4, 00:09:59.778 "num_base_bdevs_discovered": 4, 00:09:59.778 "num_base_bdevs_operational": 4, 00:09:59.778 "base_bdevs_list": [ 00:09:59.778 { 00:09:59.778 "name": "NewBaseBdev", 00:09:59.778 "uuid": "3df8de21-e3b5-4a0b-9532-7fb9fb8010e4", 00:09:59.778 "is_configured": true, 00:09:59.778 "data_offset": 0, 00:09:59.778 "data_size": 65536 00:09:59.778 }, 00:09:59.778 { 00:09:59.778 "name": "BaseBdev2", 00:09:59.778 "uuid": "4254e6b4-0854-4c6a-98e5-3bcd6da8cea5", 00:09:59.778 "is_configured": true, 00:09:59.778 "data_offset": 0, 00:09:59.778 "data_size": 65536 00:09:59.778 }, 00:09:59.778 { 00:09:59.778 "name": "BaseBdev3", 00:09:59.778 "uuid": "ce9d7f6e-f29d-46bc-8fcf-92be7647659e", 00:09:59.778 "is_configured": true, 00:09:59.778 "data_offset": 0, 00:09:59.778 "data_size": 65536 00:09:59.778 }, 00:09:59.778 { 00:09:59.778 "name": "BaseBdev4", 00:09:59.778 "uuid": "8ba0662f-2e2d-4753-a18a-bcef687825f5", 00:09:59.778 "is_configured": true, 00:09:59.778 "data_offset": 0, 00:09:59.778 "data_size": 65536 00:09:59.778 } 00:09:59.778 ] 00:09:59.778 }' 00:09:59.778 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.778 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.348 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:00.348 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:00.348 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:00.348 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:00.348 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:00.348 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:10:00.348 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:00.348 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:00.348 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.348 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.348 [2024-10-15 01:11:12.832391] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.348 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.348 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:00.348 "name": "Existed_Raid", 00:10:00.348 "aliases": [ 00:10:00.348 "55e7f204-926e-4dd9-bc4a-63f4f2d0a709" 00:10:00.348 ], 00:10:00.348 "product_name": "Raid Volume", 00:10:00.348 "block_size": 512, 00:10:00.348 "num_blocks": 262144, 00:10:00.348 "uuid": "55e7f204-926e-4dd9-bc4a-63f4f2d0a709", 00:10:00.348 "assigned_rate_limits": { 00:10:00.348 "rw_ios_per_sec": 0, 00:10:00.348 "rw_mbytes_per_sec": 0, 00:10:00.348 "r_mbytes_per_sec": 0, 00:10:00.348 "w_mbytes_per_sec": 0 00:10:00.348 }, 00:10:00.348 "claimed": false, 00:10:00.348 "zoned": false, 00:10:00.348 "supported_io_types": { 00:10:00.348 "read": true, 00:10:00.348 "write": true, 00:10:00.348 "unmap": true, 00:10:00.348 "flush": true, 00:10:00.348 "reset": true, 00:10:00.348 "nvme_admin": false, 00:10:00.348 "nvme_io": false, 00:10:00.348 "nvme_io_md": false, 00:10:00.348 "write_zeroes": true, 00:10:00.348 "zcopy": false, 00:10:00.348 "get_zone_info": false, 00:10:00.348 "zone_management": false, 00:10:00.348 "zone_append": false, 00:10:00.348 "compare": false, 00:10:00.348 "compare_and_write": false, 00:10:00.348 "abort": false, 00:10:00.348 "seek_hole": false, 00:10:00.348 "seek_data": false, 00:10:00.348 "copy": false, 
00:10:00.348 "nvme_iov_md": false 00:10:00.348 }, 00:10:00.348 "memory_domains": [ 00:10:00.348 { 00:10:00.348 "dma_device_id": "system", 00:10:00.348 "dma_device_type": 1 00:10:00.348 }, 00:10:00.348 { 00:10:00.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.348 "dma_device_type": 2 00:10:00.348 }, 00:10:00.348 { 00:10:00.348 "dma_device_id": "system", 00:10:00.348 "dma_device_type": 1 00:10:00.348 }, 00:10:00.348 { 00:10:00.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.348 "dma_device_type": 2 00:10:00.348 }, 00:10:00.348 { 00:10:00.348 "dma_device_id": "system", 00:10:00.348 "dma_device_type": 1 00:10:00.348 }, 00:10:00.348 { 00:10:00.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.348 "dma_device_type": 2 00:10:00.348 }, 00:10:00.348 { 00:10:00.348 "dma_device_id": "system", 00:10:00.348 "dma_device_type": 1 00:10:00.348 }, 00:10:00.348 { 00:10:00.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.348 "dma_device_type": 2 00:10:00.348 } 00:10:00.348 ], 00:10:00.348 "driver_specific": { 00:10:00.348 "raid": { 00:10:00.348 "uuid": "55e7f204-926e-4dd9-bc4a-63f4f2d0a709", 00:10:00.348 "strip_size_kb": 64, 00:10:00.348 "state": "online", 00:10:00.348 "raid_level": "concat", 00:10:00.348 "superblock": false, 00:10:00.348 "num_base_bdevs": 4, 00:10:00.349 "num_base_bdevs_discovered": 4, 00:10:00.349 "num_base_bdevs_operational": 4, 00:10:00.349 "base_bdevs_list": [ 00:10:00.349 { 00:10:00.349 "name": "NewBaseBdev", 00:10:00.349 "uuid": "3df8de21-e3b5-4a0b-9532-7fb9fb8010e4", 00:10:00.349 "is_configured": true, 00:10:00.349 "data_offset": 0, 00:10:00.349 "data_size": 65536 00:10:00.349 }, 00:10:00.349 { 00:10:00.349 "name": "BaseBdev2", 00:10:00.349 "uuid": "4254e6b4-0854-4c6a-98e5-3bcd6da8cea5", 00:10:00.349 "is_configured": true, 00:10:00.349 "data_offset": 0, 00:10:00.349 "data_size": 65536 00:10:00.349 }, 00:10:00.349 { 00:10:00.349 "name": "BaseBdev3", 00:10:00.349 "uuid": "ce9d7f6e-f29d-46bc-8fcf-92be7647659e", 00:10:00.349 
"is_configured": true, 00:10:00.349 "data_offset": 0, 00:10:00.349 "data_size": 65536 00:10:00.349 }, 00:10:00.349 { 00:10:00.349 "name": "BaseBdev4", 00:10:00.349 "uuid": "8ba0662f-2e2d-4753-a18a-bcef687825f5", 00:10:00.349 "is_configured": true, 00:10:00.349 "data_offset": 0, 00:10:00.349 "data_size": 65536 00:10:00.349 } 00:10:00.349 ] 00:10:00.349 } 00:10:00.349 } 00:10:00.349 }' 00:10:00.349 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:00.349 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:00.349 BaseBdev2 00:10:00.349 BaseBdev3 00:10:00.349 BaseBdev4' 00:10:00.349 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.349 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:00.349 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.349 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:00.349 01:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.349 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.349 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.349 01:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.349 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.349 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.349 01:11:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.349 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:00.349 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.349 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.349 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.349 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.349 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.349 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.349 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.349 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:00.349 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.349 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.349 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:00.610 01:11:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.610 [2024-10-15 01:11:13.159454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:00.610 [2024-10-15 01:11:13.159485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.610 [2024-10-15 01:11:13.159557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.610 [2024-10-15 01:11:13.159648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.610 [2024-10-15 01:11:13.159664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 81905 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 81905 ']' 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 81905 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81905 00:10:00.610 killing process with pid 81905 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81905' 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 81905 00:10:00.610 [2024-10-15 01:11:13.208730] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:00.610 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 81905 00:10:00.610 [2024-10-15 01:11:13.249075] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:00.870 00:10:00.870 real 0m9.480s 00:10:00.870 user 0m16.331s 00:10:00.870 sys 0m1.893s 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.870 ************************************ 00:10:00.870 END TEST raid_state_function_test 00:10:00.870 ************************************ 
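The test that just finished (bdev_raid.sh@187-193 above) checks that the raid bdev's `[.block_size, .md_size, .md_interleave, .dif_type]` tuple matches the tuple of every configured base bdev. The loop can be sketched standalone, without a running SPDK target; this is a minimal illustration assuming `jq` is installed, with the `rpc_cmd bdev_get_bdevs` output replaced by abbreviated sample JSON of the same shape:

```shell
#!/bin/sh
# Sketch of the comparison loop from bdev_raid.sh@187-193: every configured
# base bdev must report the same block_size/md_size/md_interleave/dif_type
# tuple as the raid bdev itself. Sample JSON stands in for rpc_cmd output.

raid_json='{
  "name": "Existed_Raid",
  "block_size": 512,
  "driver_specific": { "raid": { "base_bdevs_list": [
    { "name": "NewBaseBdev", "is_configured": true },
    { "name": "BaseBdev2",   "is_configured": true }
  ] } }
}'

# Tuple from the raid bdev; jq renders missing keys (null) as empty strings,
# which is why the log shows cmp_raid_bdev='512   ' with trailing blanks.
cmp_raid_bdev=$(printf '%s' "$raid_json" |
  jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')

# Configured base bdev names, as extracted by bdev_raid.sh@188.
base_bdev_names=$(printf '%s' "$raid_json" |
  jq -r '.driver_specific.raid.base_bdevs_list[]
         | select(.is_configured == true).name')

for name in $base_bdev_names; do
  # The real test runs `rpc_cmd bdev_get_bdevs -b $name` here; a canned
  # matching record is used for illustration.
  cmp_base_bdev=$(printf '{"block_size": 512}' |
    jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
  [ "$cmp_base_bdev" = "$cmp_raid_bdev" ] || { echo "mismatch for $name"; exit 1; }
done
echo "all base bdevs match"
```

In the log above both tuples reduce to `512` plus three empty fields, so the `[[ 512 == \5\1\2\ \ \ ]]` comparisons succeed for all four base bdevs.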
00:10:00.870 01:11:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:00.870 01:11:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:00.870 01:11:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.870 01:11:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:00.870 ************************************ 00:10:00.870 START TEST raid_state_function_test_sb 00:10:00.870 ************************************ 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.870 
01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=82555 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82555' 00:10:00.870 Process raid pid: 82555 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82555 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82555 ']' 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.870 01:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.871 01:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.871 01:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.130 [2024-10-15 01:11:13.631978] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:10:01.131 [2024-10-15 01:11:13.632101] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.131 [2024-10-15 01:11:13.776063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.131 [2024-10-15 01:11:13.802479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.131 [2024-10-15 01:11:13.845153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.131 [2024-10-15 01:11:13.845217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.089 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.089 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:02.089 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.089 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.089 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.089 [2024-10-15 01:11:14.455167] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.089 [2024-10-15 01:11:14.455237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.089 [2024-10-15 01:11:14.455254] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.089 [2024-10-15 01:11:14.455265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.089 [2024-10-15 01:11:14.455271] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:02.089 [2024-10-15 01:11:14.455282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.089 [2024-10-15 01:11:14.455288] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:02.089 [2024-10-15 01:11:14.455297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:02.089 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.089 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:02.089 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.089 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.089 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.090 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.090 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.090 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.090 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.090 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.090 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.090 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.090 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.090 
01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.090 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.090 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.090 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.090 "name": "Existed_Raid", 00:10:02.090 "uuid": "eebaf7cc-1e92-42ca-aedb-f2629b66dde5", 00:10:02.090 "strip_size_kb": 64, 00:10:02.090 "state": "configuring", 00:10:02.090 "raid_level": "concat", 00:10:02.090 "superblock": true, 00:10:02.090 "num_base_bdevs": 4, 00:10:02.090 "num_base_bdevs_discovered": 0, 00:10:02.090 "num_base_bdevs_operational": 4, 00:10:02.090 "base_bdevs_list": [ 00:10:02.090 { 00:10:02.090 "name": "BaseBdev1", 00:10:02.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.090 "is_configured": false, 00:10:02.090 "data_offset": 0, 00:10:02.090 "data_size": 0 00:10:02.090 }, 00:10:02.090 { 00:10:02.090 "name": "BaseBdev2", 00:10:02.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.090 "is_configured": false, 00:10:02.090 "data_offset": 0, 00:10:02.090 "data_size": 0 00:10:02.090 }, 00:10:02.090 { 00:10:02.090 "name": "BaseBdev3", 00:10:02.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.090 "is_configured": false, 00:10:02.090 "data_offset": 0, 00:10:02.090 "data_size": 0 00:10:02.090 }, 00:10:02.090 { 00:10:02.090 "name": "BaseBdev4", 00:10:02.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.090 "is_configured": false, 00:10:02.090 "data_offset": 0, 00:10:02.090 "data_size": 0 00:10:02.090 } 00:10:02.090 ] 00:10:02.090 }' 00:10:02.090 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.090 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.350 01:11:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.350 [2024-10-15 01:11:14.918252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.350 [2024-10-15 01:11:14.918296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.350 [2024-10-15 01:11:14.930248] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.350 [2024-10-15 01:11:14.930281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.350 [2024-10-15 01:11:14.930289] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.350 [2024-10-15 01:11:14.930297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.350 [2024-10-15 01:11:14.930303] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.350 [2024-10-15 01:11:14.930311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.350 [2024-10-15 01:11:14.930317] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:02.350 [2024-10-15 01:11:14.930325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.350 [2024-10-15 01:11:14.951016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.350 BaseBdev1 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.350 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.350 [ 00:10:02.350 { 00:10:02.350 "name": "BaseBdev1", 00:10:02.350 "aliases": [ 00:10:02.350 "ad8129e4-ce4e-4a5e-876b-b1f22f329eb6" 00:10:02.350 ], 00:10:02.350 "product_name": "Malloc disk", 00:10:02.350 "block_size": 512, 00:10:02.350 "num_blocks": 65536, 00:10:02.350 "uuid": "ad8129e4-ce4e-4a5e-876b-b1f22f329eb6", 00:10:02.350 "assigned_rate_limits": { 00:10:02.350 "rw_ios_per_sec": 0, 00:10:02.350 "rw_mbytes_per_sec": 0, 00:10:02.350 "r_mbytes_per_sec": 0, 00:10:02.350 "w_mbytes_per_sec": 0 00:10:02.350 }, 00:10:02.350 "claimed": true, 00:10:02.350 "claim_type": "exclusive_write", 00:10:02.350 "zoned": false, 00:10:02.350 "supported_io_types": { 00:10:02.350 "read": true, 00:10:02.350 "write": true, 00:10:02.350 "unmap": true, 00:10:02.350 "flush": true, 00:10:02.350 "reset": true, 00:10:02.350 "nvme_admin": false, 00:10:02.350 "nvme_io": false, 00:10:02.350 "nvme_io_md": false, 00:10:02.350 "write_zeroes": true, 00:10:02.350 "zcopy": true, 00:10:02.350 "get_zone_info": false, 00:10:02.350 "zone_management": false, 00:10:02.350 "zone_append": false, 00:10:02.350 "compare": false, 00:10:02.350 "compare_and_write": false, 00:10:02.350 "abort": true, 00:10:02.350 "seek_hole": false, 00:10:02.350 "seek_data": false, 00:10:02.350 "copy": true, 00:10:02.350 "nvme_iov_md": false 00:10:02.350 }, 00:10:02.350 "memory_domains": [ 00:10:02.351 { 00:10:02.351 "dma_device_id": "system", 00:10:02.351 "dma_device_type": 1 00:10:02.351 }, 00:10:02.351 { 00:10:02.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.351 "dma_device_type": 2 00:10:02.351 } 
00:10:02.351 ], 00:10:02.351 "driver_specific": {} 00:10:02.351 } 00:10:02.351 ] 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.351 01:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.351 01:11:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.351 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.351 "name": "Existed_Raid", 00:10:02.351 "uuid": "54f2f0ab-8760-4c6d-9d6a-4831320345a9", 00:10:02.351 "strip_size_kb": 64, 00:10:02.351 "state": "configuring", 00:10:02.351 "raid_level": "concat", 00:10:02.351 "superblock": true, 00:10:02.351 "num_base_bdevs": 4, 00:10:02.351 "num_base_bdevs_discovered": 1, 00:10:02.351 "num_base_bdevs_operational": 4, 00:10:02.351 "base_bdevs_list": [ 00:10:02.351 { 00:10:02.351 "name": "BaseBdev1", 00:10:02.351 "uuid": "ad8129e4-ce4e-4a5e-876b-b1f22f329eb6", 00:10:02.351 "is_configured": true, 00:10:02.351 "data_offset": 2048, 00:10:02.351 "data_size": 63488 00:10:02.351 }, 00:10:02.351 { 00:10:02.351 "name": "BaseBdev2", 00:10:02.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.351 "is_configured": false, 00:10:02.351 "data_offset": 0, 00:10:02.351 "data_size": 0 00:10:02.351 }, 00:10:02.351 { 00:10:02.351 "name": "BaseBdev3", 00:10:02.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.351 "is_configured": false, 00:10:02.351 "data_offset": 0, 00:10:02.351 "data_size": 0 00:10:02.351 }, 00:10:02.351 { 00:10:02.351 "name": "BaseBdev4", 00:10:02.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.351 "is_configured": false, 00:10:02.351 "data_offset": 0, 00:10:02.351 "data_size": 0 00:10:02.351 } 00:10:02.351 ] 00:10:02.351 }' 00:10:02.351 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.351 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.921 01:11:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.921 [2024-10-15 01:11:15.426258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.921 [2024-10-15 01:11:15.426318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.921 [2024-10-15 01:11:15.438315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.921 [2024-10-15 01:11:15.440219] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.921 [2024-10-15 01:11:15.440255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.921 [2024-10-15 01:11:15.440264] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.921 [2024-10-15 01:11:15.440273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.921 [2024-10-15 01:11:15.440279] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:02.921 [2024-10-15 01:11:15.440288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:02.921 "name": "Existed_Raid", 00:10:02.921 "uuid": "a5666efe-eba1-4211-943f-50df26cb702d", 00:10:02.921 "strip_size_kb": 64, 00:10:02.921 "state": "configuring", 00:10:02.921 "raid_level": "concat", 00:10:02.921 "superblock": true, 00:10:02.921 "num_base_bdevs": 4, 00:10:02.921 "num_base_bdevs_discovered": 1, 00:10:02.921 "num_base_bdevs_operational": 4, 00:10:02.921 "base_bdevs_list": [ 00:10:02.921 { 00:10:02.921 "name": "BaseBdev1", 00:10:02.921 "uuid": "ad8129e4-ce4e-4a5e-876b-b1f22f329eb6", 00:10:02.921 "is_configured": true, 00:10:02.921 "data_offset": 2048, 00:10:02.921 "data_size": 63488 00:10:02.921 }, 00:10:02.921 { 00:10:02.921 "name": "BaseBdev2", 00:10:02.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.921 "is_configured": false, 00:10:02.921 "data_offset": 0, 00:10:02.921 "data_size": 0 00:10:02.921 }, 00:10:02.921 { 00:10:02.921 "name": "BaseBdev3", 00:10:02.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.921 "is_configured": false, 00:10:02.921 "data_offset": 0, 00:10:02.921 "data_size": 0 00:10:02.921 }, 00:10:02.921 { 00:10:02.921 "name": "BaseBdev4", 00:10:02.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.921 "is_configured": false, 00:10:02.921 "data_offset": 0, 00:10:02.921 "data_size": 0 00:10:02.921 } 00:10:02.921 ] 00:10:02.921 }' 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.921 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.181 [2024-10-15 01:11:15.860645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:03.181 BaseBdev2 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.181 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.181 [ 00:10:03.181 { 00:10:03.181 "name": "BaseBdev2", 00:10:03.181 "aliases": [ 00:10:03.181 "13927146-948a-4d89-85e3-e5fa451c2193" 00:10:03.181 ], 00:10:03.181 "product_name": "Malloc disk", 00:10:03.181 "block_size": 512, 00:10:03.181 "num_blocks": 65536, 00:10:03.181 "uuid": "13927146-948a-4d89-85e3-e5fa451c2193", 
00:10:03.181 "assigned_rate_limits": { 00:10:03.181 "rw_ios_per_sec": 0, 00:10:03.181 "rw_mbytes_per_sec": 0, 00:10:03.181 "r_mbytes_per_sec": 0, 00:10:03.181 "w_mbytes_per_sec": 0 00:10:03.181 }, 00:10:03.181 "claimed": true, 00:10:03.181 "claim_type": "exclusive_write", 00:10:03.181 "zoned": false, 00:10:03.181 "supported_io_types": { 00:10:03.181 "read": true, 00:10:03.181 "write": true, 00:10:03.181 "unmap": true, 00:10:03.181 "flush": true, 00:10:03.181 "reset": true, 00:10:03.181 "nvme_admin": false, 00:10:03.181 "nvme_io": false, 00:10:03.181 "nvme_io_md": false, 00:10:03.181 "write_zeroes": true, 00:10:03.181 "zcopy": true, 00:10:03.181 "get_zone_info": false, 00:10:03.181 "zone_management": false, 00:10:03.181 "zone_append": false, 00:10:03.181 "compare": false, 00:10:03.181 "compare_and_write": false, 00:10:03.181 "abort": true, 00:10:03.181 "seek_hole": false, 00:10:03.181 "seek_data": false, 00:10:03.181 "copy": true, 00:10:03.181 "nvme_iov_md": false 00:10:03.181 }, 00:10:03.181 "memory_domains": [ 00:10:03.181 { 00:10:03.181 "dma_device_id": "system", 00:10:03.181 "dma_device_type": 1 00:10:03.181 }, 00:10:03.181 { 00:10:03.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.181 "dma_device_type": 2 00:10:03.181 } 00:10:03.181 ], 00:10:03.181 "driver_specific": {} 00:10:03.181 } 00:10:03.181 ] 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.182 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.441 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.441 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.441 "name": "Existed_Raid", 00:10:03.441 "uuid": "a5666efe-eba1-4211-943f-50df26cb702d", 00:10:03.441 "strip_size_kb": 64, 00:10:03.441 "state": "configuring", 00:10:03.441 "raid_level": "concat", 00:10:03.441 "superblock": true, 00:10:03.441 "num_base_bdevs": 4, 00:10:03.441 "num_base_bdevs_discovered": 2, 00:10:03.441 
"num_base_bdevs_operational": 4, 00:10:03.441 "base_bdevs_list": [ 00:10:03.441 { 00:10:03.441 "name": "BaseBdev1", 00:10:03.441 "uuid": "ad8129e4-ce4e-4a5e-876b-b1f22f329eb6", 00:10:03.441 "is_configured": true, 00:10:03.441 "data_offset": 2048, 00:10:03.441 "data_size": 63488 00:10:03.441 }, 00:10:03.441 { 00:10:03.441 "name": "BaseBdev2", 00:10:03.441 "uuid": "13927146-948a-4d89-85e3-e5fa451c2193", 00:10:03.441 "is_configured": true, 00:10:03.441 "data_offset": 2048, 00:10:03.441 "data_size": 63488 00:10:03.441 }, 00:10:03.441 { 00:10:03.441 "name": "BaseBdev3", 00:10:03.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.441 "is_configured": false, 00:10:03.441 "data_offset": 0, 00:10:03.441 "data_size": 0 00:10:03.441 }, 00:10:03.441 { 00:10:03.441 "name": "BaseBdev4", 00:10:03.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.441 "is_configured": false, 00:10:03.441 "data_offset": 0, 00:10:03.441 "data_size": 0 00:10:03.441 } 00:10:03.441 ] 00:10:03.441 }' 00:10:03.441 01:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.441 01:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.701 [2024-10-15 01:11:16.302630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.701 BaseBdev3 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.701 [ 00:10:03.701 { 00:10:03.701 "name": "BaseBdev3", 00:10:03.701 "aliases": [ 00:10:03.701 "a8729be9-8a92-4a2b-98c6-ad24fd31d9bf" 00:10:03.701 ], 00:10:03.701 "product_name": "Malloc disk", 00:10:03.701 "block_size": 512, 00:10:03.701 "num_blocks": 65536, 00:10:03.701 "uuid": "a8729be9-8a92-4a2b-98c6-ad24fd31d9bf", 00:10:03.701 "assigned_rate_limits": { 00:10:03.701 "rw_ios_per_sec": 0, 00:10:03.701 "rw_mbytes_per_sec": 0, 00:10:03.701 "r_mbytes_per_sec": 0, 00:10:03.701 "w_mbytes_per_sec": 0 00:10:03.701 }, 00:10:03.701 "claimed": true, 00:10:03.701 "claim_type": "exclusive_write", 00:10:03.701 "zoned": false, 00:10:03.701 "supported_io_types": { 
00:10:03.701 "read": true, 00:10:03.701 "write": true, 00:10:03.701 "unmap": true, 00:10:03.701 "flush": true, 00:10:03.701 "reset": true, 00:10:03.701 "nvme_admin": false, 00:10:03.701 "nvme_io": false, 00:10:03.701 "nvme_io_md": false, 00:10:03.701 "write_zeroes": true, 00:10:03.701 "zcopy": true, 00:10:03.701 "get_zone_info": false, 00:10:03.701 "zone_management": false, 00:10:03.701 "zone_append": false, 00:10:03.701 "compare": false, 00:10:03.701 "compare_and_write": false, 00:10:03.701 "abort": true, 00:10:03.701 "seek_hole": false, 00:10:03.701 "seek_data": false, 00:10:03.701 "copy": true, 00:10:03.701 "nvme_iov_md": false 00:10:03.701 }, 00:10:03.701 "memory_domains": [ 00:10:03.701 { 00:10:03.701 "dma_device_id": "system", 00:10:03.701 "dma_device_type": 1 00:10:03.701 }, 00:10:03.701 { 00:10:03.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.701 "dma_device_type": 2 00:10:03.701 } 00:10:03.701 ], 00:10:03.701 "driver_specific": {} 00:10:03.701 } 00:10:03.701 ] 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.701 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.701 "name": "Existed_Raid", 00:10:03.701 "uuid": "a5666efe-eba1-4211-943f-50df26cb702d", 00:10:03.701 "strip_size_kb": 64, 00:10:03.701 "state": "configuring", 00:10:03.701 "raid_level": "concat", 00:10:03.701 "superblock": true, 00:10:03.701 "num_base_bdevs": 4, 00:10:03.701 "num_base_bdevs_discovered": 3, 00:10:03.702 "num_base_bdevs_operational": 4, 00:10:03.702 "base_bdevs_list": [ 00:10:03.702 { 00:10:03.702 "name": "BaseBdev1", 00:10:03.702 "uuid": "ad8129e4-ce4e-4a5e-876b-b1f22f329eb6", 00:10:03.702 "is_configured": true, 00:10:03.702 "data_offset": 2048, 00:10:03.702 "data_size": 63488 00:10:03.702 }, 00:10:03.702 { 00:10:03.702 "name": "BaseBdev2", 00:10:03.702 
"uuid": "13927146-948a-4d89-85e3-e5fa451c2193", 00:10:03.702 "is_configured": true, 00:10:03.702 "data_offset": 2048, 00:10:03.702 "data_size": 63488 00:10:03.702 }, 00:10:03.702 { 00:10:03.702 "name": "BaseBdev3", 00:10:03.702 "uuid": "a8729be9-8a92-4a2b-98c6-ad24fd31d9bf", 00:10:03.702 "is_configured": true, 00:10:03.702 "data_offset": 2048, 00:10:03.702 "data_size": 63488 00:10:03.702 }, 00:10:03.702 { 00:10:03.702 "name": "BaseBdev4", 00:10:03.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.702 "is_configured": false, 00:10:03.702 "data_offset": 0, 00:10:03.702 "data_size": 0 00:10:03.702 } 00:10:03.702 ] 00:10:03.702 }' 00:10:03.702 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.702 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.270 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:04.270 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.270 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.270 [2024-10-15 01:11:16.824934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:04.270 [2024-10-15 01:11:16.825146] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:04.270 [2024-10-15 01:11:16.825161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:04.270 [2024-10-15 01:11:16.825461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:04.270 BaseBdev4 00:10:04.270 [2024-10-15 01:11:16.825620] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:04.270 [2024-10-15 01:11:16.825653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:10:04.270 [2024-10-15 01:11:16.825790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.271 [ 00:10:04.271 { 00:10:04.271 "name": "BaseBdev4", 00:10:04.271 "aliases": [ 00:10:04.271 "58598dcc-87b4-4d26-91d5-3820e28cdfef" 00:10:04.271 ], 00:10:04.271 "product_name": "Malloc disk", 00:10:04.271 "block_size": 512, 00:10:04.271 
"num_blocks": 65536, 00:10:04.271 "uuid": "58598dcc-87b4-4d26-91d5-3820e28cdfef", 00:10:04.271 "assigned_rate_limits": { 00:10:04.271 "rw_ios_per_sec": 0, 00:10:04.271 "rw_mbytes_per_sec": 0, 00:10:04.271 "r_mbytes_per_sec": 0, 00:10:04.271 "w_mbytes_per_sec": 0 00:10:04.271 }, 00:10:04.271 "claimed": true, 00:10:04.271 "claim_type": "exclusive_write", 00:10:04.271 "zoned": false, 00:10:04.271 "supported_io_types": { 00:10:04.271 "read": true, 00:10:04.271 "write": true, 00:10:04.271 "unmap": true, 00:10:04.271 "flush": true, 00:10:04.271 "reset": true, 00:10:04.271 "nvme_admin": false, 00:10:04.271 "nvme_io": false, 00:10:04.271 "nvme_io_md": false, 00:10:04.271 "write_zeroes": true, 00:10:04.271 "zcopy": true, 00:10:04.271 "get_zone_info": false, 00:10:04.271 "zone_management": false, 00:10:04.271 "zone_append": false, 00:10:04.271 "compare": false, 00:10:04.271 "compare_and_write": false, 00:10:04.271 "abort": true, 00:10:04.271 "seek_hole": false, 00:10:04.271 "seek_data": false, 00:10:04.271 "copy": true, 00:10:04.271 "nvme_iov_md": false 00:10:04.271 }, 00:10:04.271 "memory_domains": [ 00:10:04.271 { 00:10:04.271 "dma_device_id": "system", 00:10:04.271 "dma_device_type": 1 00:10:04.271 }, 00:10:04.271 { 00:10:04.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.271 "dma_device_type": 2 00:10:04.271 } 00:10:04.271 ], 00:10:04.271 "driver_specific": {} 00:10:04.271 } 00:10:04.271 ] 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.271 "name": "Existed_Raid", 00:10:04.271 "uuid": "a5666efe-eba1-4211-943f-50df26cb702d", 00:10:04.271 "strip_size_kb": 64, 00:10:04.271 "state": "online", 00:10:04.271 "raid_level": "concat", 00:10:04.271 "superblock": true, 00:10:04.271 "num_base_bdevs": 4, 
00:10:04.271 "num_base_bdevs_discovered": 4, 00:10:04.271 "num_base_bdevs_operational": 4, 00:10:04.271 "base_bdevs_list": [ 00:10:04.271 { 00:10:04.271 "name": "BaseBdev1", 00:10:04.271 "uuid": "ad8129e4-ce4e-4a5e-876b-b1f22f329eb6", 00:10:04.271 "is_configured": true, 00:10:04.271 "data_offset": 2048, 00:10:04.271 "data_size": 63488 00:10:04.271 }, 00:10:04.271 { 00:10:04.271 "name": "BaseBdev2", 00:10:04.271 "uuid": "13927146-948a-4d89-85e3-e5fa451c2193", 00:10:04.271 "is_configured": true, 00:10:04.271 "data_offset": 2048, 00:10:04.271 "data_size": 63488 00:10:04.271 }, 00:10:04.271 { 00:10:04.271 "name": "BaseBdev3", 00:10:04.271 "uuid": "a8729be9-8a92-4a2b-98c6-ad24fd31d9bf", 00:10:04.271 "is_configured": true, 00:10:04.271 "data_offset": 2048, 00:10:04.271 "data_size": 63488 00:10:04.271 }, 00:10:04.271 { 00:10:04.271 "name": "BaseBdev4", 00:10:04.271 "uuid": "58598dcc-87b4-4d26-91d5-3820e28cdfef", 00:10:04.271 "is_configured": true, 00:10:04.271 "data_offset": 2048, 00:10:04.271 "data_size": 63488 00:10:04.271 } 00:10:04.271 ] 00:10:04.271 }' 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.271 01:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.841 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.841 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:04.841 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.841 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.841 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.841 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.841 
01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:04.841 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.841 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.841 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.841 [2024-10-15 01:11:17.296532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.841 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.841 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.841 "name": "Existed_Raid", 00:10:04.841 "aliases": [ 00:10:04.841 "a5666efe-eba1-4211-943f-50df26cb702d" 00:10:04.841 ], 00:10:04.841 "product_name": "Raid Volume", 00:10:04.841 "block_size": 512, 00:10:04.842 "num_blocks": 253952, 00:10:04.842 "uuid": "a5666efe-eba1-4211-943f-50df26cb702d", 00:10:04.842 "assigned_rate_limits": { 00:10:04.842 "rw_ios_per_sec": 0, 00:10:04.842 "rw_mbytes_per_sec": 0, 00:10:04.842 "r_mbytes_per_sec": 0, 00:10:04.842 "w_mbytes_per_sec": 0 00:10:04.842 }, 00:10:04.842 "claimed": false, 00:10:04.842 "zoned": false, 00:10:04.842 "supported_io_types": { 00:10:04.842 "read": true, 00:10:04.842 "write": true, 00:10:04.842 "unmap": true, 00:10:04.842 "flush": true, 00:10:04.842 "reset": true, 00:10:04.842 "nvme_admin": false, 00:10:04.842 "nvme_io": false, 00:10:04.842 "nvme_io_md": false, 00:10:04.842 "write_zeroes": true, 00:10:04.842 "zcopy": false, 00:10:04.842 "get_zone_info": false, 00:10:04.842 "zone_management": false, 00:10:04.842 "zone_append": false, 00:10:04.842 "compare": false, 00:10:04.842 "compare_and_write": false, 00:10:04.842 "abort": false, 00:10:04.842 "seek_hole": false, 00:10:04.842 "seek_data": false, 00:10:04.842 "copy": false, 00:10:04.842 
"nvme_iov_md": false 00:10:04.842 }, 00:10:04.842 "memory_domains": [ 00:10:04.842 { 00:10:04.842 "dma_device_id": "system", 00:10:04.842 "dma_device_type": 1 00:10:04.842 }, 00:10:04.842 { 00:10:04.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.842 "dma_device_type": 2 00:10:04.842 }, 00:10:04.842 { 00:10:04.842 "dma_device_id": "system", 00:10:04.842 "dma_device_type": 1 00:10:04.842 }, 00:10:04.842 { 00:10:04.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.842 "dma_device_type": 2 00:10:04.842 }, 00:10:04.842 { 00:10:04.842 "dma_device_id": "system", 00:10:04.842 "dma_device_type": 1 00:10:04.842 }, 00:10:04.842 { 00:10:04.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.842 "dma_device_type": 2 00:10:04.842 }, 00:10:04.842 { 00:10:04.842 "dma_device_id": "system", 00:10:04.842 "dma_device_type": 1 00:10:04.842 }, 00:10:04.842 { 00:10:04.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.842 "dma_device_type": 2 00:10:04.842 } 00:10:04.842 ], 00:10:04.842 "driver_specific": { 00:10:04.842 "raid": { 00:10:04.842 "uuid": "a5666efe-eba1-4211-943f-50df26cb702d", 00:10:04.842 "strip_size_kb": 64, 00:10:04.842 "state": "online", 00:10:04.842 "raid_level": "concat", 00:10:04.842 "superblock": true, 00:10:04.842 "num_base_bdevs": 4, 00:10:04.842 "num_base_bdevs_discovered": 4, 00:10:04.842 "num_base_bdevs_operational": 4, 00:10:04.842 "base_bdevs_list": [ 00:10:04.842 { 00:10:04.842 "name": "BaseBdev1", 00:10:04.842 "uuid": "ad8129e4-ce4e-4a5e-876b-b1f22f329eb6", 00:10:04.842 "is_configured": true, 00:10:04.842 "data_offset": 2048, 00:10:04.842 "data_size": 63488 00:10:04.842 }, 00:10:04.842 { 00:10:04.842 "name": "BaseBdev2", 00:10:04.842 "uuid": "13927146-948a-4d89-85e3-e5fa451c2193", 00:10:04.842 "is_configured": true, 00:10:04.842 "data_offset": 2048, 00:10:04.842 "data_size": 63488 00:10:04.842 }, 00:10:04.842 { 00:10:04.842 "name": "BaseBdev3", 00:10:04.842 "uuid": "a8729be9-8a92-4a2b-98c6-ad24fd31d9bf", 00:10:04.842 "is_configured": true, 
00:10:04.842 "data_offset": 2048, 00:10:04.842 "data_size": 63488 00:10:04.842 }, 00:10:04.842 { 00:10:04.842 "name": "BaseBdev4", 00:10:04.842 "uuid": "58598dcc-87b4-4d26-91d5-3820e28cdfef", 00:10:04.842 "is_configured": true, 00:10:04.842 "data_offset": 2048, 00:10:04.842 "data_size": 63488 00:10:04.842 } 00:10:04.842 ] 00:10:04.842 } 00:10:04.842 } 00:10:04.842 }' 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:04.842 BaseBdev2 00:10:04.842 BaseBdev3 00:10:04.842 BaseBdev4' 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.842 01:11:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.842 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.842 [2024-10-15 01:11:17.555831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:04.842 [2024-10-15 01:11:17.555863] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.842 [2024-10-15 01:11:17.555921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:05.102 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.102 "name": "Existed_Raid", 00:10:05.102 "uuid": "a5666efe-eba1-4211-943f-50df26cb702d", 00:10:05.102 "strip_size_kb": 64, 00:10:05.102 "state": "offline", 00:10:05.102 "raid_level": "concat", 00:10:05.102 "superblock": true, 00:10:05.102 "num_base_bdevs": 4, 00:10:05.102 "num_base_bdevs_discovered": 3, 00:10:05.102 "num_base_bdevs_operational": 3, 00:10:05.102 "base_bdevs_list": [ 00:10:05.102 { 00:10:05.102 "name": null, 00:10:05.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.102 "is_configured": false, 00:10:05.102 "data_offset": 0, 00:10:05.102 "data_size": 63488 00:10:05.102 }, 00:10:05.102 { 00:10:05.102 "name": "BaseBdev2", 00:10:05.102 "uuid": "13927146-948a-4d89-85e3-e5fa451c2193", 00:10:05.102 "is_configured": true, 00:10:05.102 "data_offset": 2048, 00:10:05.102 "data_size": 63488 00:10:05.102 }, 00:10:05.102 { 00:10:05.102 "name": "BaseBdev3", 00:10:05.102 "uuid": "a8729be9-8a92-4a2b-98c6-ad24fd31d9bf", 00:10:05.102 "is_configured": true, 00:10:05.102 "data_offset": 2048, 00:10:05.102 "data_size": 63488 00:10:05.102 }, 00:10:05.102 { 00:10:05.102 "name": "BaseBdev4", 00:10:05.102 "uuid": "58598dcc-87b4-4d26-91d5-3820e28cdfef", 00:10:05.102 "is_configured": true, 00:10:05.102 "data_offset": 2048, 00:10:05.103 "data_size": 63488 00:10:05.103 } 00:10:05.103 ] 00:10:05.103 }' 00:10:05.103 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.103 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.363 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:05.363 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.363 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.363 01:11:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.363 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.363 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.363 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.363 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.363 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.363 01:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:05.363 01:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.363 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.363 [2024-10-15 01:11:18.006476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:05.363 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.363 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.363 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.363 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.363 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.363 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.363 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.363 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:05.363 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.363 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.363 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:05.363 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.363 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.363 [2024-10-15 01:11:18.077616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:05.623 01:11:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.623 [2024-10-15 01:11:18.148725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:05.623 [2024-10-15 01:11:18.148814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.623 BaseBdev2 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.623 [ 00:10:05.623 { 00:10:05.623 "name": "BaseBdev2", 00:10:05.623 "aliases": [ 00:10:05.623 
"5b0dc1b9-5271-4b3c-a334-3f441495c3af" 00:10:05.623 ], 00:10:05.623 "product_name": "Malloc disk", 00:10:05.623 "block_size": 512, 00:10:05.623 "num_blocks": 65536, 00:10:05.623 "uuid": "5b0dc1b9-5271-4b3c-a334-3f441495c3af", 00:10:05.623 "assigned_rate_limits": { 00:10:05.623 "rw_ios_per_sec": 0, 00:10:05.623 "rw_mbytes_per_sec": 0, 00:10:05.623 "r_mbytes_per_sec": 0, 00:10:05.623 "w_mbytes_per_sec": 0 00:10:05.623 }, 00:10:05.623 "claimed": false, 00:10:05.623 "zoned": false, 00:10:05.623 "supported_io_types": { 00:10:05.623 "read": true, 00:10:05.623 "write": true, 00:10:05.623 "unmap": true, 00:10:05.623 "flush": true, 00:10:05.623 "reset": true, 00:10:05.623 "nvme_admin": false, 00:10:05.623 "nvme_io": false, 00:10:05.623 "nvme_io_md": false, 00:10:05.623 "write_zeroes": true, 00:10:05.623 "zcopy": true, 00:10:05.623 "get_zone_info": false, 00:10:05.623 "zone_management": false, 00:10:05.623 "zone_append": false, 00:10:05.623 "compare": false, 00:10:05.623 "compare_and_write": false, 00:10:05.623 "abort": true, 00:10:05.623 "seek_hole": false, 00:10:05.623 "seek_data": false, 00:10:05.623 "copy": true, 00:10:05.623 "nvme_iov_md": false 00:10:05.623 }, 00:10:05.623 "memory_domains": [ 00:10:05.623 { 00:10:05.623 "dma_device_id": "system", 00:10:05.623 "dma_device_type": 1 00:10:05.623 }, 00:10:05.623 { 00:10:05.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.623 "dma_device_type": 2 00:10:05.623 } 00:10:05.623 ], 00:10:05.623 "driver_specific": {} 00:10:05.623 } 00:10:05.623 ] 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.623 01:11:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.623 BaseBdev3 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:05.623 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.624 [ 00:10:05.624 { 
00:10:05.624 "name": "BaseBdev3", 00:10:05.624 "aliases": [ 00:10:05.624 "e643e0ba-e1cc-4593-a7d2-52ec2d27a6d4" 00:10:05.624 ], 00:10:05.624 "product_name": "Malloc disk", 00:10:05.624 "block_size": 512, 00:10:05.624 "num_blocks": 65536, 00:10:05.624 "uuid": "e643e0ba-e1cc-4593-a7d2-52ec2d27a6d4", 00:10:05.624 "assigned_rate_limits": { 00:10:05.624 "rw_ios_per_sec": 0, 00:10:05.624 "rw_mbytes_per_sec": 0, 00:10:05.624 "r_mbytes_per_sec": 0, 00:10:05.624 "w_mbytes_per_sec": 0 00:10:05.624 }, 00:10:05.624 "claimed": false, 00:10:05.624 "zoned": false, 00:10:05.624 "supported_io_types": { 00:10:05.624 "read": true, 00:10:05.624 "write": true, 00:10:05.624 "unmap": true, 00:10:05.624 "flush": true, 00:10:05.624 "reset": true, 00:10:05.624 "nvme_admin": false, 00:10:05.624 "nvme_io": false, 00:10:05.624 "nvme_io_md": false, 00:10:05.624 "write_zeroes": true, 00:10:05.624 "zcopy": true, 00:10:05.624 "get_zone_info": false, 00:10:05.624 "zone_management": false, 00:10:05.624 "zone_append": false, 00:10:05.624 "compare": false, 00:10:05.624 "compare_and_write": false, 00:10:05.624 "abort": true, 00:10:05.624 "seek_hole": false, 00:10:05.624 "seek_data": false, 00:10:05.624 "copy": true, 00:10:05.624 "nvme_iov_md": false 00:10:05.624 }, 00:10:05.624 "memory_domains": [ 00:10:05.624 { 00:10:05.624 "dma_device_id": "system", 00:10:05.624 "dma_device_type": 1 00:10:05.624 }, 00:10:05.624 { 00:10:05.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.624 "dma_device_type": 2 00:10:05.624 } 00:10:05.624 ], 00:10:05.624 "driver_specific": {} 00:10:05.624 } 00:10:05.624 ] 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.624 BaseBdev4 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.624 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:05.884 [ 00:10:05.884 { 00:10:05.884 "name": "BaseBdev4", 00:10:05.884 "aliases": [ 00:10:05.884 "0e347886-812e-4905-aec0-0f870eb3d7a6" 00:10:05.884 ], 00:10:05.884 "product_name": "Malloc disk", 00:10:05.884 "block_size": 512, 00:10:05.884 "num_blocks": 65536, 00:10:05.884 "uuid": "0e347886-812e-4905-aec0-0f870eb3d7a6", 00:10:05.884 "assigned_rate_limits": { 00:10:05.884 "rw_ios_per_sec": 0, 00:10:05.884 "rw_mbytes_per_sec": 0, 00:10:05.884 "r_mbytes_per_sec": 0, 00:10:05.884 "w_mbytes_per_sec": 0 00:10:05.884 }, 00:10:05.884 "claimed": false, 00:10:05.884 "zoned": false, 00:10:05.884 "supported_io_types": { 00:10:05.884 "read": true, 00:10:05.884 "write": true, 00:10:05.884 "unmap": true, 00:10:05.884 "flush": true, 00:10:05.884 "reset": true, 00:10:05.884 "nvme_admin": false, 00:10:05.884 "nvme_io": false, 00:10:05.884 "nvme_io_md": false, 00:10:05.884 "write_zeroes": true, 00:10:05.884 "zcopy": true, 00:10:05.884 "get_zone_info": false, 00:10:05.884 "zone_management": false, 00:10:05.884 "zone_append": false, 00:10:05.884 "compare": false, 00:10:05.884 "compare_and_write": false, 00:10:05.884 "abort": true, 00:10:05.884 "seek_hole": false, 00:10:05.884 "seek_data": false, 00:10:05.884 "copy": true, 00:10:05.884 "nvme_iov_md": false 00:10:05.884 }, 00:10:05.884 "memory_domains": [ 00:10:05.884 { 00:10:05.884 "dma_device_id": "system", 00:10:05.884 "dma_device_type": 1 00:10:05.884 }, 00:10:05.884 { 00:10:05.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.884 "dma_device_type": 2 00:10:05.884 } 00:10:05.884 ], 00:10:05.884 "driver_specific": {} 00:10:05.884 } 00:10:05.884 ] 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:05.884 01:11:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.884 [2024-10-15 01:11:18.380811] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.884 [2024-10-15 01:11:18.380909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.884 [2024-10-15 01:11:18.380980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.884 [2024-10-15 01:11:18.382820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.884 [2024-10-15 01:11:18.382905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.884 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.884 "name": "Existed_Raid", 00:10:05.884 "uuid": "1f05795f-8685-4368-886d-130cc8e86952", 00:10:05.884 "strip_size_kb": 64, 00:10:05.884 "state": "configuring", 00:10:05.884 "raid_level": "concat", 00:10:05.884 "superblock": true, 00:10:05.884 "num_base_bdevs": 4, 00:10:05.884 "num_base_bdevs_discovered": 3, 00:10:05.884 "num_base_bdevs_operational": 4, 00:10:05.884 "base_bdevs_list": [ 00:10:05.884 { 00:10:05.884 "name": "BaseBdev1", 00:10:05.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.884 "is_configured": false, 00:10:05.884 "data_offset": 0, 00:10:05.884 "data_size": 0 00:10:05.884 }, 00:10:05.884 { 00:10:05.884 "name": "BaseBdev2", 00:10:05.884 "uuid": "5b0dc1b9-5271-4b3c-a334-3f441495c3af", 00:10:05.884 "is_configured": true, 00:10:05.884 "data_offset": 2048, 00:10:05.884 "data_size": 63488 
00:10:05.884 }, 00:10:05.884 { 00:10:05.884 "name": "BaseBdev3", 00:10:05.884 "uuid": "e643e0ba-e1cc-4593-a7d2-52ec2d27a6d4", 00:10:05.884 "is_configured": true, 00:10:05.884 "data_offset": 2048, 00:10:05.884 "data_size": 63488 00:10:05.884 }, 00:10:05.884 { 00:10:05.884 "name": "BaseBdev4", 00:10:05.884 "uuid": "0e347886-812e-4905-aec0-0f870eb3d7a6", 00:10:05.884 "is_configured": true, 00:10:05.884 "data_offset": 2048, 00:10:05.884 "data_size": 63488 00:10:05.885 } 00:10:05.885 ] 00:10:05.885 }' 00:10:05.885 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.885 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.144 [2024-10-15 01:11:18.768159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.144 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.145 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.145 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.145 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.145 "name": "Existed_Raid", 00:10:06.145 "uuid": "1f05795f-8685-4368-886d-130cc8e86952", 00:10:06.145 "strip_size_kb": 64, 00:10:06.145 "state": "configuring", 00:10:06.145 "raid_level": "concat", 00:10:06.145 "superblock": true, 00:10:06.145 "num_base_bdevs": 4, 00:10:06.145 "num_base_bdevs_discovered": 2, 00:10:06.145 "num_base_bdevs_operational": 4, 00:10:06.145 "base_bdevs_list": [ 00:10:06.145 { 00:10:06.145 "name": "BaseBdev1", 00:10:06.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.145 "is_configured": false, 00:10:06.145 "data_offset": 0, 00:10:06.145 "data_size": 0 00:10:06.145 }, 00:10:06.145 { 00:10:06.145 "name": null, 00:10:06.145 "uuid": "5b0dc1b9-5271-4b3c-a334-3f441495c3af", 00:10:06.145 "is_configured": false, 00:10:06.145 "data_offset": 0, 00:10:06.145 "data_size": 63488 
00:10:06.145 }, 00:10:06.145 { 00:10:06.145 "name": "BaseBdev3", 00:10:06.145 "uuid": "e643e0ba-e1cc-4593-a7d2-52ec2d27a6d4", 00:10:06.145 "is_configured": true, 00:10:06.145 "data_offset": 2048, 00:10:06.145 "data_size": 63488 00:10:06.145 }, 00:10:06.145 { 00:10:06.145 "name": "BaseBdev4", 00:10:06.145 "uuid": "0e347886-812e-4905-aec0-0f870eb3d7a6", 00:10:06.145 "is_configured": true, 00:10:06.145 "data_offset": 2048, 00:10:06.145 "data_size": 63488 00:10:06.145 } 00:10:06.145 ] 00:10:06.145 }' 00:10:06.145 01:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.145 01:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.714 [2024-10-15 01:11:19.250435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.714 BaseBdev1 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.714 [ 00:10:06.714 { 00:10:06.714 "name": "BaseBdev1", 00:10:06.714 "aliases": [ 00:10:06.714 "824b5f9d-7c49-4335-95c1-cbd9e5fbff96" 00:10:06.714 ], 00:10:06.714 "product_name": "Malloc disk", 00:10:06.714 "block_size": 512, 00:10:06.714 "num_blocks": 65536, 00:10:06.714 "uuid": "824b5f9d-7c49-4335-95c1-cbd9e5fbff96", 00:10:06.714 "assigned_rate_limits": { 00:10:06.714 "rw_ios_per_sec": 0, 00:10:06.714 "rw_mbytes_per_sec": 0, 
00:10:06.714 "r_mbytes_per_sec": 0, 00:10:06.714 "w_mbytes_per_sec": 0 00:10:06.714 }, 00:10:06.714 "claimed": true, 00:10:06.714 "claim_type": "exclusive_write", 00:10:06.714 "zoned": false, 00:10:06.714 "supported_io_types": { 00:10:06.714 "read": true, 00:10:06.714 "write": true, 00:10:06.714 "unmap": true, 00:10:06.714 "flush": true, 00:10:06.714 "reset": true, 00:10:06.714 "nvme_admin": false, 00:10:06.714 "nvme_io": false, 00:10:06.714 "nvme_io_md": false, 00:10:06.714 "write_zeroes": true, 00:10:06.714 "zcopy": true, 00:10:06.714 "get_zone_info": false, 00:10:06.714 "zone_management": false, 00:10:06.714 "zone_append": false, 00:10:06.714 "compare": false, 00:10:06.714 "compare_and_write": false, 00:10:06.714 "abort": true, 00:10:06.714 "seek_hole": false, 00:10:06.714 "seek_data": false, 00:10:06.714 "copy": true, 00:10:06.714 "nvme_iov_md": false 00:10:06.714 }, 00:10:06.714 "memory_domains": [ 00:10:06.714 { 00:10:06.714 "dma_device_id": "system", 00:10:06.714 "dma_device_type": 1 00:10:06.714 }, 00:10:06.714 { 00:10:06.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.714 "dma_device_type": 2 00:10:06.714 } 00:10:06.714 ], 00:10:06.714 "driver_specific": {} 00:10:06.714 } 00:10:06.714 ] 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.714 01:11:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.714 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.715 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.715 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.715 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.715 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.715 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.715 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.715 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.715 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.715 "name": "Existed_Raid", 00:10:06.715 "uuid": "1f05795f-8685-4368-886d-130cc8e86952", 00:10:06.715 "strip_size_kb": 64, 00:10:06.715 "state": "configuring", 00:10:06.715 "raid_level": "concat", 00:10:06.715 "superblock": true, 00:10:06.715 "num_base_bdevs": 4, 00:10:06.715 "num_base_bdevs_discovered": 3, 00:10:06.715 "num_base_bdevs_operational": 4, 00:10:06.715 "base_bdevs_list": [ 00:10:06.715 { 00:10:06.715 "name": "BaseBdev1", 00:10:06.715 "uuid": "824b5f9d-7c49-4335-95c1-cbd9e5fbff96", 00:10:06.715 "is_configured": true, 00:10:06.715 "data_offset": 2048, 00:10:06.715 "data_size": 63488 00:10:06.715 }, 00:10:06.715 { 
00:10:06.715 "name": null, 00:10:06.715 "uuid": "5b0dc1b9-5271-4b3c-a334-3f441495c3af", 00:10:06.715 "is_configured": false, 00:10:06.715 "data_offset": 0, 00:10:06.715 "data_size": 63488 00:10:06.715 }, 00:10:06.715 { 00:10:06.715 "name": "BaseBdev3", 00:10:06.715 "uuid": "e643e0ba-e1cc-4593-a7d2-52ec2d27a6d4", 00:10:06.715 "is_configured": true, 00:10:06.715 "data_offset": 2048, 00:10:06.715 "data_size": 63488 00:10:06.715 }, 00:10:06.715 { 00:10:06.715 "name": "BaseBdev4", 00:10:06.715 "uuid": "0e347886-812e-4905-aec0-0f870eb3d7a6", 00:10:06.715 "is_configured": true, 00:10:06.715 "data_offset": 2048, 00:10:06.715 "data_size": 63488 00:10:06.715 } 00:10:06.715 ] 00:10:06.715 }' 00:10:06.715 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.715 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.974 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.974 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.974 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.974 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.234 [2024-10-15 01:11:19.745660] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.234 01:11:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.234 "name": "Existed_Raid", 00:10:07.234 "uuid": "1f05795f-8685-4368-886d-130cc8e86952", 00:10:07.234 "strip_size_kb": 64, 00:10:07.234 "state": "configuring", 00:10:07.234 "raid_level": "concat", 00:10:07.234 "superblock": true, 00:10:07.234 "num_base_bdevs": 4, 00:10:07.234 "num_base_bdevs_discovered": 2, 00:10:07.234 "num_base_bdevs_operational": 4, 00:10:07.234 "base_bdevs_list": [ 00:10:07.234 { 00:10:07.234 "name": "BaseBdev1", 00:10:07.234 "uuid": "824b5f9d-7c49-4335-95c1-cbd9e5fbff96", 00:10:07.234 "is_configured": true, 00:10:07.234 "data_offset": 2048, 00:10:07.234 "data_size": 63488 00:10:07.234 }, 00:10:07.234 { 00:10:07.234 "name": null, 00:10:07.234 "uuid": "5b0dc1b9-5271-4b3c-a334-3f441495c3af", 00:10:07.234 "is_configured": false, 00:10:07.234 "data_offset": 0, 00:10:07.234 "data_size": 63488 00:10:07.234 }, 00:10:07.234 { 00:10:07.234 "name": null, 00:10:07.234 "uuid": "e643e0ba-e1cc-4593-a7d2-52ec2d27a6d4", 00:10:07.234 "is_configured": false, 00:10:07.234 "data_offset": 0, 00:10:07.234 "data_size": 63488 00:10:07.234 }, 00:10:07.234 { 00:10:07.234 "name": "BaseBdev4", 00:10:07.234 "uuid": "0e347886-812e-4905-aec0-0f870eb3d7a6", 00:10:07.234 "is_configured": true, 00:10:07.234 "data_offset": 2048, 00:10:07.234 "data_size": 63488 00:10:07.234 } 00:10:07.234 ] 00:10:07.234 }' 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.234 01:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.494 
01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.494 [2024-10-15 01:11:20.196929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.494 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.754 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.754 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.754 "name": "Existed_Raid", 00:10:07.754 "uuid": "1f05795f-8685-4368-886d-130cc8e86952", 00:10:07.754 "strip_size_kb": 64, 00:10:07.754 "state": "configuring", 00:10:07.754 "raid_level": "concat", 00:10:07.754 "superblock": true, 00:10:07.754 "num_base_bdevs": 4, 00:10:07.754 "num_base_bdevs_discovered": 3, 00:10:07.754 "num_base_bdevs_operational": 4, 00:10:07.754 "base_bdevs_list": [ 00:10:07.754 { 00:10:07.754 "name": "BaseBdev1", 00:10:07.754 "uuid": "824b5f9d-7c49-4335-95c1-cbd9e5fbff96", 00:10:07.754 "is_configured": true, 00:10:07.754 "data_offset": 2048, 00:10:07.754 "data_size": 63488 00:10:07.755 }, 00:10:07.755 { 00:10:07.755 "name": null, 00:10:07.755 "uuid": "5b0dc1b9-5271-4b3c-a334-3f441495c3af", 00:10:07.755 "is_configured": false, 00:10:07.755 "data_offset": 0, 00:10:07.755 "data_size": 63488 00:10:07.755 }, 00:10:07.755 { 00:10:07.755 "name": "BaseBdev3", 00:10:07.755 "uuid": "e643e0ba-e1cc-4593-a7d2-52ec2d27a6d4", 00:10:07.755 "is_configured": true, 00:10:07.755 "data_offset": 2048, 00:10:07.755 "data_size": 63488 00:10:07.755 }, 00:10:07.755 { 00:10:07.755 "name": "BaseBdev4", 00:10:07.755 "uuid": 
"0e347886-812e-4905-aec0-0f870eb3d7a6", 00:10:07.755 "is_configured": true, 00:10:07.755 "data_offset": 2048, 00:10:07.755 "data_size": 63488 00:10:07.755 } 00:10:07.755 ] 00:10:07.755 }' 00:10:07.755 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.755 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.015 [2024-10-15 01:11:20.696091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.015 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.275 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.275 "name": "Existed_Raid", 00:10:08.275 "uuid": "1f05795f-8685-4368-886d-130cc8e86952", 00:10:08.275 "strip_size_kb": 64, 00:10:08.275 "state": "configuring", 00:10:08.275 "raid_level": "concat", 00:10:08.275 "superblock": true, 00:10:08.275 "num_base_bdevs": 4, 00:10:08.275 "num_base_bdevs_discovered": 2, 00:10:08.275 "num_base_bdevs_operational": 4, 00:10:08.275 "base_bdevs_list": [ 00:10:08.275 { 00:10:08.275 "name": null, 00:10:08.275 
"uuid": "824b5f9d-7c49-4335-95c1-cbd9e5fbff96", 00:10:08.275 "is_configured": false, 00:10:08.275 "data_offset": 0, 00:10:08.275 "data_size": 63488 00:10:08.275 }, 00:10:08.275 { 00:10:08.275 "name": null, 00:10:08.275 "uuid": "5b0dc1b9-5271-4b3c-a334-3f441495c3af", 00:10:08.275 "is_configured": false, 00:10:08.275 "data_offset": 0, 00:10:08.275 "data_size": 63488 00:10:08.275 }, 00:10:08.275 { 00:10:08.275 "name": "BaseBdev3", 00:10:08.275 "uuid": "e643e0ba-e1cc-4593-a7d2-52ec2d27a6d4", 00:10:08.275 "is_configured": true, 00:10:08.275 "data_offset": 2048, 00:10:08.275 "data_size": 63488 00:10:08.275 }, 00:10:08.275 { 00:10:08.275 "name": "BaseBdev4", 00:10:08.275 "uuid": "0e347886-812e-4905-aec0-0f870eb3d7a6", 00:10:08.275 "is_configured": true, 00:10:08.275 "data_offset": 2048, 00:10:08.275 "data_size": 63488 00:10:08.275 } 00:10:08.275 ] 00:10:08.275 }' 00:10:08.275 01:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.275 01:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.535 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.535 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.535 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.535 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:08.535 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.535 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:08.535 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:08.535 01:11:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.535 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.536 [2024-10-15 01:11:21.141866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.536 "name": "Existed_Raid", 00:10:08.536 "uuid": "1f05795f-8685-4368-886d-130cc8e86952", 00:10:08.536 "strip_size_kb": 64, 00:10:08.536 "state": "configuring", 00:10:08.536 "raid_level": "concat", 00:10:08.536 "superblock": true, 00:10:08.536 "num_base_bdevs": 4, 00:10:08.536 "num_base_bdevs_discovered": 3, 00:10:08.536 "num_base_bdevs_operational": 4, 00:10:08.536 "base_bdevs_list": [ 00:10:08.536 { 00:10:08.536 "name": null, 00:10:08.536 "uuid": "824b5f9d-7c49-4335-95c1-cbd9e5fbff96", 00:10:08.536 "is_configured": false, 00:10:08.536 "data_offset": 0, 00:10:08.536 "data_size": 63488 00:10:08.536 }, 00:10:08.536 { 00:10:08.536 "name": "BaseBdev2", 00:10:08.536 "uuid": "5b0dc1b9-5271-4b3c-a334-3f441495c3af", 00:10:08.536 "is_configured": true, 00:10:08.536 "data_offset": 2048, 00:10:08.536 "data_size": 63488 00:10:08.536 }, 00:10:08.536 { 00:10:08.536 "name": "BaseBdev3", 00:10:08.536 "uuid": "e643e0ba-e1cc-4593-a7d2-52ec2d27a6d4", 00:10:08.536 "is_configured": true, 00:10:08.536 "data_offset": 2048, 00:10:08.536 "data_size": 63488 00:10:08.536 }, 00:10:08.536 { 00:10:08.536 "name": "BaseBdev4", 00:10:08.536 "uuid": "0e347886-812e-4905-aec0-0f870eb3d7a6", 00:10:08.536 "is_configured": true, 00:10:08.536 "data_offset": 2048, 00:10:08.536 "data_size": 63488 00:10:08.536 } 00:10:08.536 ] 00:10:08.536 }' 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.536 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.107 01:11:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 824b5f9d-7c49-4335-95c1-cbd9e5fbff96 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.107 NewBaseBdev 00:10:09.107 [2024-10-15 01:11:21.715972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:09.107 [2024-10-15 01:11:21.716157] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:09.107 [2024-10-15 01:11:21.716170] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:09.107 [2024-10-15 01:11:21.716460] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:10:09.107 [2024-10-15 01:11:21.716589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:09.107 [2024-10-15 01:11:21.716600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:09.107 [2024-10-15 01:11:21.716703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.107 
01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.107 [ 00:10:09.107 { 00:10:09.107 "name": "NewBaseBdev", 00:10:09.107 "aliases": [ 00:10:09.107 "824b5f9d-7c49-4335-95c1-cbd9e5fbff96" 00:10:09.107 ], 00:10:09.107 "product_name": "Malloc disk", 00:10:09.107 "block_size": 512, 00:10:09.107 "num_blocks": 65536, 00:10:09.107 "uuid": "824b5f9d-7c49-4335-95c1-cbd9e5fbff96", 00:10:09.107 "assigned_rate_limits": { 00:10:09.107 "rw_ios_per_sec": 0, 00:10:09.107 "rw_mbytes_per_sec": 0, 00:10:09.107 "r_mbytes_per_sec": 0, 00:10:09.107 "w_mbytes_per_sec": 0 00:10:09.107 }, 00:10:09.107 "claimed": true, 00:10:09.107 "claim_type": "exclusive_write", 00:10:09.107 "zoned": false, 00:10:09.107 "supported_io_types": { 00:10:09.107 "read": true, 00:10:09.107 "write": true, 00:10:09.107 "unmap": true, 00:10:09.107 "flush": true, 00:10:09.107 "reset": true, 00:10:09.107 "nvme_admin": false, 00:10:09.107 "nvme_io": false, 00:10:09.107 "nvme_io_md": false, 00:10:09.107 "write_zeroes": true, 00:10:09.107 "zcopy": true, 00:10:09.107 "get_zone_info": false, 00:10:09.107 "zone_management": false, 00:10:09.107 "zone_append": false, 00:10:09.107 "compare": false, 00:10:09.107 "compare_and_write": false, 00:10:09.107 "abort": true, 00:10:09.107 "seek_hole": false, 00:10:09.107 "seek_data": false, 00:10:09.107 "copy": true, 00:10:09.107 "nvme_iov_md": false 00:10:09.107 }, 00:10:09.107 "memory_domains": [ 00:10:09.107 { 00:10:09.107 "dma_device_id": "system", 00:10:09.107 "dma_device_type": 1 00:10:09.107 }, 00:10:09.107 { 00:10:09.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.107 "dma_device_type": 2 00:10:09.107 } 00:10:09.107 ], 00:10:09.107 "driver_specific": {} 00:10:09.107 } 00:10:09.107 ] 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:09.107 01:11:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.107 "name": "Existed_Raid", 00:10:09.107 "uuid": "1f05795f-8685-4368-886d-130cc8e86952", 00:10:09.107 "strip_size_kb": 64, 00:10:09.107 
"state": "online", 00:10:09.107 "raid_level": "concat", 00:10:09.107 "superblock": true, 00:10:09.107 "num_base_bdevs": 4, 00:10:09.107 "num_base_bdevs_discovered": 4, 00:10:09.107 "num_base_bdevs_operational": 4, 00:10:09.107 "base_bdevs_list": [ 00:10:09.107 { 00:10:09.107 "name": "NewBaseBdev", 00:10:09.107 "uuid": "824b5f9d-7c49-4335-95c1-cbd9e5fbff96", 00:10:09.107 "is_configured": true, 00:10:09.107 "data_offset": 2048, 00:10:09.107 "data_size": 63488 00:10:09.107 }, 00:10:09.107 { 00:10:09.107 "name": "BaseBdev2", 00:10:09.107 "uuid": "5b0dc1b9-5271-4b3c-a334-3f441495c3af", 00:10:09.107 "is_configured": true, 00:10:09.107 "data_offset": 2048, 00:10:09.107 "data_size": 63488 00:10:09.107 }, 00:10:09.107 { 00:10:09.107 "name": "BaseBdev3", 00:10:09.107 "uuid": "e643e0ba-e1cc-4593-a7d2-52ec2d27a6d4", 00:10:09.107 "is_configured": true, 00:10:09.107 "data_offset": 2048, 00:10:09.107 "data_size": 63488 00:10:09.107 }, 00:10:09.107 { 00:10:09.107 "name": "BaseBdev4", 00:10:09.107 "uuid": "0e347886-812e-4905-aec0-0f870eb3d7a6", 00:10:09.107 "is_configured": true, 00:10:09.107 "data_offset": 2048, 00:10:09.107 "data_size": 63488 00:10:09.107 } 00:10:09.107 ] 00:10:09.107 }' 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.107 01:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:09.678 
01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.678 [2024-10-15 01:11:22.167727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:09.678 "name": "Existed_Raid", 00:10:09.678 "aliases": [ 00:10:09.678 "1f05795f-8685-4368-886d-130cc8e86952" 00:10:09.678 ], 00:10:09.678 "product_name": "Raid Volume", 00:10:09.678 "block_size": 512, 00:10:09.678 "num_blocks": 253952, 00:10:09.678 "uuid": "1f05795f-8685-4368-886d-130cc8e86952", 00:10:09.678 "assigned_rate_limits": { 00:10:09.678 "rw_ios_per_sec": 0, 00:10:09.678 "rw_mbytes_per_sec": 0, 00:10:09.678 "r_mbytes_per_sec": 0, 00:10:09.678 "w_mbytes_per_sec": 0 00:10:09.678 }, 00:10:09.678 "claimed": false, 00:10:09.678 "zoned": false, 00:10:09.678 "supported_io_types": { 00:10:09.678 "read": true, 00:10:09.678 "write": true, 00:10:09.678 "unmap": true, 00:10:09.678 "flush": true, 00:10:09.678 "reset": true, 00:10:09.678 "nvme_admin": false, 00:10:09.678 "nvme_io": false, 00:10:09.678 "nvme_io_md": false, 00:10:09.678 "write_zeroes": true, 00:10:09.678 "zcopy": false, 00:10:09.678 "get_zone_info": false, 00:10:09.678 "zone_management": false, 00:10:09.678 "zone_append": false, 00:10:09.678 "compare": false, 00:10:09.678 "compare_and_write": false, 00:10:09.678 "abort": 
false, 00:10:09.678 "seek_hole": false, 00:10:09.678 "seek_data": false, 00:10:09.678 "copy": false, 00:10:09.678 "nvme_iov_md": false 00:10:09.678 }, 00:10:09.678 "memory_domains": [ 00:10:09.678 { 00:10:09.678 "dma_device_id": "system", 00:10:09.678 "dma_device_type": 1 00:10:09.678 }, 00:10:09.678 { 00:10:09.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.678 "dma_device_type": 2 00:10:09.678 }, 00:10:09.678 { 00:10:09.678 "dma_device_id": "system", 00:10:09.678 "dma_device_type": 1 00:10:09.678 }, 00:10:09.678 { 00:10:09.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.678 "dma_device_type": 2 00:10:09.678 }, 00:10:09.678 { 00:10:09.678 "dma_device_id": "system", 00:10:09.678 "dma_device_type": 1 00:10:09.678 }, 00:10:09.678 { 00:10:09.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.678 "dma_device_type": 2 00:10:09.678 }, 00:10:09.678 { 00:10:09.678 "dma_device_id": "system", 00:10:09.678 "dma_device_type": 1 00:10:09.678 }, 00:10:09.678 { 00:10:09.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.678 "dma_device_type": 2 00:10:09.678 } 00:10:09.678 ], 00:10:09.678 "driver_specific": { 00:10:09.678 "raid": { 00:10:09.678 "uuid": "1f05795f-8685-4368-886d-130cc8e86952", 00:10:09.678 "strip_size_kb": 64, 00:10:09.678 "state": "online", 00:10:09.678 "raid_level": "concat", 00:10:09.678 "superblock": true, 00:10:09.678 "num_base_bdevs": 4, 00:10:09.678 "num_base_bdevs_discovered": 4, 00:10:09.678 "num_base_bdevs_operational": 4, 00:10:09.678 "base_bdevs_list": [ 00:10:09.678 { 00:10:09.678 "name": "NewBaseBdev", 00:10:09.678 "uuid": "824b5f9d-7c49-4335-95c1-cbd9e5fbff96", 00:10:09.678 "is_configured": true, 00:10:09.678 "data_offset": 2048, 00:10:09.678 "data_size": 63488 00:10:09.678 }, 00:10:09.678 { 00:10:09.678 "name": "BaseBdev2", 00:10:09.678 "uuid": "5b0dc1b9-5271-4b3c-a334-3f441495c3af", 00:10:09.678 "is_configured": true, 00:10:09.678 "data_offset": 2048, 00:10:09.678 "data_size": 63488 00:10:09.678 }, 00:10:09.678 { 00:10:09.678 
"name": "BaseBdev3", 00:10:09.678 "uuid": "e643e0ba-e1cc-4593-a7d2-52ec2d27a6d4", 00:10:09.678 "is_configured": true, 00:10:09.678 "data_offset": 2048, 00:10:09.678 "data_size": 63488 00:10:09.678 }, 00:10:09.678 { 00:10:09.678 "name": "BaseBdev4", 00:10:09.678 "uuid": "0e347886-812e-4905-aec0-0f870eb3d7a6", 00:10:09.678 "is_configured": true, 00:10:09.678 "data_offset": 2048, 00:10:09.678 "data_size": 63488 00:10:09.678 } 00:10:09.678 ] 00:10:09.678 } 00:10:09.678 } 00:10:09.678 }' 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:09.678 BaseBdev2 00:10:09.678 BaseBdev3 00:10:09.678 BaseBdev4' 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.678 01:11:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.678 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.939 [2024-10-15 01:11:22.486823] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.939 [2024-10-15 01:11:22.486851] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.939 [2024-10-15 01:11:22.486939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.939 [2024-10-15 01:11:22.487007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.939 [2024-10-15 01:11:22.487017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82555 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82555 ']' 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 82555 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82555 00:10:09.939 killing process with pid 82555 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82555' 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 82555 00:10:09.939 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 82555 00:10:09.939 [2024-10-15 01:11:22.516912] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:09.939 [2024-10-15 01:11:22.558088] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.207 01:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:10.207 00:10:10.207 real 0m9.234s 00:10:10.207 user 0m15.798s 00:10:10.207 sys 0m1.856s 00:10:10.207 ************************************ 00:10:10.207 END TEST raid_state_function_test_sb 00:10:10.207 
************************************ 00:10:10.207 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.207 01:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.207 01:11:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:10.207 01:11:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:10.207 01:11:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.207 01:11:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:10.207 ************************************ 00:10:10.207 START TEST raid_superblock_test 00:10:10.207 ************************************ 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:10.207 01:11:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:10.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83203 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83203 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83203 ']' 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.207 01:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.207 [2024-10-15 01:11:22.919727] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:10:10.207 [2024-10-15 01:11:22.919968] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83203 ] 00:10:10.478 [2024-10-15 01:11:23.064627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.478 [2024-10-15 01:11:23.091512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.478 [2024-10-15 01:11:23.135165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.478 [2024-10-15 01:11:23.135289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.048 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.048 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:11.048 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:11.048 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:11.048 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:11.048 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:11.048 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:11.048 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:11.048 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:11.048 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:11.048 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:11.048 
01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.048 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.308 malloc1 00:10:11.308 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.308 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:11.308 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.309 [2024-10-15 01:11:23.782581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:11.309 [2024-10-15 01:11:23.782721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.309 [2024-10-15 01:11:23.782757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:11.309 [2024-10-15 01:11:23.782771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.309 [2024-10-15 01:11:23.785264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.309 [2024-10-15 01:11:23.785306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:11.309 pt1 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.309 malloc2 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.309 [2024-10-15 01:11:23.811619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:11.309 [2024-10-15 01:11:23.811750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.309 [2024-10-15 01:11:23.811789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:11.309 [2024-10-15 01:11:23.811836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.309 [2024-10-15 01:11:23.814111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.309 [2024-10-15 01:11:23.814192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:11.309 
pt2 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.309 malloc3 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.309 [2024-10-15 01:11:23.844555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:11.309 [2024-10-15 01:11:23.844658] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.309 [2024-10-15 01:11:23.844698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:11.309 [2024-10-15 01:11:23.844735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.309 [2024-10-15 01:11:23.846996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.309 [2024-10-15 01:11:23.847068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:11.309 pt3 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.309 malloc4 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.309 [2024-10-15 01:11:23.886849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:11.309 [2024-10-15 01:11:23.886909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.309 [2024-10-15 01:11:23.886927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:11.309 [2024-10-15 01:11:23.886942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.309 [2024-10-15 01:11:23.889345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.309 [2024-10-15 01:11:23.889380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:11.309 pt4 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.309 [2024-10-15 01:11:23.898849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:11.309 [2024-10-15 
01:11:23.900885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:11.309 [2024-10-15 01:11:23.900953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:11.309 [2024-10-15 01:11:23.901008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:11.309 [2024-10-15 01:11:23.901159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:11.309 [2024-10-15 01:11:23.901171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:11.309 [2024-10-15 01:11:23.901452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:11.309 [2024-10-15 01:11:23.901596] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:11.309 [2024-10-15 01:11:23.901613] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:11.309 [2024-10-15 01:11:23.901747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.309 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.309 "name": "raid_bdev1", 00:10:11.309 "uuid": "67ee4c2b-284e-4174-9477-0e2f1b9d8234", 00:10:11.309 "strip_size_kb": 64, 00:10:11.309 "state": "online", 00:10:11.309 "raid_level": "concat", 00:10:11.309 "superblock": true, 00:10:11.309 "num_base_bdevs": 4, 00:10:11.309 "num_base_bdevs_discovered": 4, 00:10:11.309 "num_base_bdevs_operational": 4, 00:10:11.309 "base_bdevs_list": [ 00:10:11.309 { 00:10:11.309 "name": "pt1", 00:10:11.309 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.309 "is_configured": true, 00:10:11.309 "data_offset": 2048, 00:10:11.309 "data_size": 63488 00:10:11.309 }, 00:10:11.309 { 00:10:11.309 "name": "pt2", 00:10:11.309 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.310 "is_configured": true, 00:10:11.310 "data_offset": 2048, 00:10:11.310 "data_size": 63488 00:10:11.310 }, 00:10:11.310 { 00:10:11.310 "name": "pt3", 00:10:11.310 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.310 "is_configured": true, 00:10:11.310 "data_offset": 2048, 00:10:11.310 
"data_size": 63488 00:10:11.310 }, 00:10:11.310 { 00:10:11.310 "name": "pt4", 00:10:11.310 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:11.310 "is_configured": true, 00:10:11.310 "data_offset": 2048, 00:10:11.310 "data_size": 63488 00:10:11.310 } 00:10:11.310 ] 00:10:11.310 }' 00:10:11.310 01:11:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.310 01:11:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.879 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:11.879 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:11.879 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.879 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.879 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.879 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.879 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:11.879 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.879 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.879 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.879 [2024-10-15 01:11:24.318507] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.879 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.879 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.879 "name": "raid_bdev1", 00:10:11.879 "aliases": [ 00:10:11.879 "67ee4c2b-284e-4174-9477-0e2f1b9d8234" 
00:10:11.879 ], 00:10:11.879 "product_name": "Raid Volume", 00:10:11.879 "block_size": 512, 00:10:11.879 "num_blocks": 253952, 00:10:11.879 "uuid": "67ee4c2b-284e-4174-9477-0e2f1b9d8234", 00:10:11.879 "assigned_rate_limits": { 00:10:11.879 "rw_ios_per_sec": 0, 00:10:11.879 "rw_mbytes_per_sec": 0, 00:10:11.879 "r_mbytes_per_sec": 0, 00:10:11.879 "w_mbytes_per_sec": 0 00:10:11.879 }, 00:10:11.879 "claimed": false, 00:10:11.879 "zoned": false, 00:10:11.879 "supported_io_types": { 00:10:11.879 "read": true, 00:10:11.879 "write": true, 00:10:11.879 "unmap": true, 00:10:11.879 "flush": true, 00:10:11.879 "reset": true, 00:10:11.879 "nvme_admin": false, 00:10:11.879 "nvme_io": false, 00:10:11.879 "nvme_io_md": false, 00:10:11.879 "write_zeroes": true, 00:10:11.879 "zcopy": false, 00:10:11.879 "get_zone_info": false, 00:10:11.879 "zone_management": false, 00:10:11.879 "zone_append": false, 00:10:11.879 "compare": false, 00:10:11.879 "compare_and_write": false, 00:10:11.879 "abort": false, 00:10:11.879 "seek_hole": false, 00:10:11.879 "seek_data": false, 00:10:11.879 "copy": false, 00:10:11.879 "nvme_iov_md": false 00:10:11.879 }, 00:10:11.879 "memory_domains": [ 00:10:11.879 { 00:10:11.879 "dma_device_id": "system", 00:10:11.879 "dma_device_type": 1 00:10:11.879 }, 00:10:11.879 { 00:10:11.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.879 "dma_device_type": 2 00:10:11.879 }, 00:10:11.879 { 00:10:11.879 "dma_device_id": "system", 00:10:11.879 "dma_device_type": 1 00:10:11.879 }, 00:10:11.879 { 00:10:11.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.879 "dma_device_type": 2 00:10:11.879 }, 00:10:11.879 { 00:10:11.879 "dma_device_id": "system", 00:10:11.879 "dma_device_type": 1 00:10:11.879 }, 00:10:11.879 { 00:10:11.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.879 "dma_device_type": 2 00:10:11.879 }, 00:10:11.879 { 00:10:11.879 "dma_device_id": "system", 00:10:11.879 "dma_device_type": 1 00:10:11.879 }, 00:10:11.879 { 00:10:11.879 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:11.879 "dma_device_type": 2 00:10:11.879 } 00:10:11.879 ], 00:10:11.879 "driver_specific": { 00:10:11.879 "raid": { 00:10:11.879 "uuid": "67ee4c2b-284e-4174-9477-0e2f1b9d8234", 00:10:11.879 "strip_size_kb": 64, 00:10:11.879 "state": "online", 00:10:11.879 "raid_level": "concat", 00:10:11.879 "superblock": true, 00:10:11.879 "num_base_bdevs": 4, 00:10:11.879 "num_base_bdevs_discovered": 4, 00:10:11.879 "num_base_bdevs_operational": 4, 00:10:11.879 "base_bdevs_list": [ 00:10:11.879 { 00:10:11.879 "name": "pt1", 00:10:11.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.879 "is_configured": true, 00:10:11.879 "data_offset": 2048, 00:10:11.879 "data_size": 63488 00:10:11.879 }, 00:10:11.879 { 00:10:11.879 "name": "pt2", 00:10:11.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.879 "is_configured": true, 00:10:11.880 "data_offset": 2048, 00:10:11.880 "data_size": 63488 00:10:11.880 }, 00:10:11.880 { 00:10:11.880 "name": "pt3", 00:10:11.880 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.880 "is_configured": true, 00:10:11.880 "data_offset": 2048, 00:10:11.880 "data_size": 63488 00:10:11.880 }, 00:10:11.880 { 00:10:11.880 "name": "pt4", 00:10:11.880 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:11.880 "is_configured": true, 00:10:11.880 "data_offset": 2048, 00:10:11.880 "data_size": 63488 00:10:11.880 } 00:10:11.880 ] 00:10:11.880 } 00:10:11.880 } 00:10:11.880 }' 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:11.880 pt2 00:10:11.880 pt3 00:10:11.880 pt4' 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.880 01:11:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.880 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:12.141 [2024-10-15 01:11:24.665829] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=67ee4c2b-284e-4174-9477-0e2f1b9d8234 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 67ee4c2b-284e-4174-9477-0e2f1b9d8234 ']' 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.141 [2024-10-15 01:11:24.717448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:12.141 [2024-10-15 01:11:24.717524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.141 [2024-10-15 01:11:24.717661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.141 [2024-10-15 01:11:24.717765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.141 [2024-10-15 01:11:24.717822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.141 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.402 [2024-10-15 01:11:24.877241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:12.402 [2024-10-15 01:11:24.879327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:12.402 [2024-10-15 01:11:24.879377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:12.402 [2024-10-15 01:11:24.879407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:12.402 [2024-10-15 01:11:24.879455] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:12.402 [2024-10-15 01:11:24.879502] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:12.402 [2024-10-15 01:11:24.879521] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:12.402 [2024-10-15 01:11:24.879537] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:12.402 [2024-10-15 01:11:24.879551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:12.402 [2024-10-15 01:11:24.879561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state 
configuring 00:10:12.402 request: 00:10:12.402 { 00:10:12.402 "name": "raid_bdev1", 00:10:12.402 "raid_level": "concat", 00:10:12.402 "base_bdevs": [ 00:10:12.402 "malloc1", 00:10:12.402 "malloc2", 00:10:12.402 "malloc3", 00:10:12.402 "malloc4" 00:10:12.402 ], 00:10:12.402 "strip_size_kb": 64, 00:10:12.402 "superblock": false, 00:10:12.402 "method": "bdev_raid_create", 00:10:12.402 "req_id": 1 00:10:12.402 } 00:10:12.402 Got JSON-RPC error response 00:10:12.402 response: 00:10:12.402 { 00:10:12.402 "code": -17, 00:10:12.402 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:12.402 } 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.402 [2024-10-15 01:11:24.941053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:12.402 [2024-10-15 01:11:24.941159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.402 [2024-10-15 01:11:24.941236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:12.402 [2024-10-15 01:11:24.941276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.402 [2024-10-15 01:11:24.943568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.402 [2024-10-15 01:11:24.943638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:12.402 [2024-10-15 01:11:24.943767] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:12.402 [2024-10-15 01:11:24.943842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:12.402 pt1 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.402 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.403 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.403 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.403 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.403 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.403 "name": "raid_bdev1", 00:10:12.403 "uuid": "67ee4c2b-284e-4174-9477-0e2f1b9d8234", 00:10:12.403 "strip_size_kb": 64, 00:10:12.403 "state": "configuring", 00:10:12.403 "raid_level": "concat", 00:10:12.403 "superblock": true, 00:10:12.403 "num_base_bdevs": 4, 00:10:12.403 "num_base_bdevs_discovered": 1, 00:10:12.403 "num_base_bdevs_operational": 4, 00:10:12.403 "base_bdevs_list": [ 00:10:12.403 { 00:10:12.403 "name": "pt1", 00:10:12.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.403 "is_configured": true, 00:10:12.403 "data_offset": 2048, 00:10:12.403 "data_size": 63488 00:10:12.403 }, 00:10:12.403 { 00:10:12.403 "name": null, 00:10:12.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.403 "is_configured": false, 00:10:12.403 "data_offset": 2048, 00:10:12.403 "data_size": 63488 00:10:12.403 }, 00:10:12.403 { 00:10:12.403 "name": null, 00:10:12.403 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.403 "is_configured": false, 00:10:12.403 "data_offset": 2048, 00:10:12.403 "data_size": 63488 00:10:12.403 }, 00:10:12.403 { 00:10:12.403 "name": null, 00:10:12.403 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:12.403 "is_configured": false, 00:10:12.403 "data_offset": 2048, 00:10:12.403 "data_size": 63488 00:10:12.403 } 00:10:12.403 ] 00:10:12.403 }' 00:10:12.403 01:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.403 01:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.663 [2024-10-15 01:11:25.316431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.663 [2024-10-15 01:11:25.316553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.663 [2024-10-15 01:11:25.316593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:12.663 [2024-10-15 01:11:25.316630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.663 [2024-10-15 01:11:25.317096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.663 [2024-10-15 01:11:25.317159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.663 [2024-10-15 01:11:25.317298] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:12.663 [2024-10-15 01:11:25.317354] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.663 pt2 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.663 [2024-10-15 01:11:25.328436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.663 01:11:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.663 "name": "raid_bdev1", 00:10:12.663 "uuid": "67ee4c2b-284e-4174-9477-0e2f1b9d8234", 00:10:12.663 "strip_size_kb": 64, 00:10:12.663 "state": "configuring", 00:10:12.663 "raid_level": "concat", 00:10:12.663 "superblock": true, 00:10:12.663 "num_base_bdevs": 4, 00:10:12.663 "num_base_bdevs_discovered": 1, 00:10:12.663 "num_base_bdevs_operational": 4, 00:10:12.663 "base_bdevs_list": [ 00:10:12.663 { 00:10:12.663 "name": "pt1", 00:10:12.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.663 "is_configured": true, 00:10:12.663 "data_offset": 2048, 00:10:12.663 "data_size": 63488 00:10:12.663 }, 00:10:12.663 { 00:10:12.663 "name": null, 00:10:12.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.663 "is_configured": false, 00:10:12.663 "data_offset": 0, 00:10:12.663 "data_size": 63488 00:10:12.663 }, 00:10:12.663 { 00:10:12.663 "name": null, 00:10:12.663 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.663 "is_configured": false, 00:10:12.663 "data_offset": 2048, 00:10:12.663 "data_size": 63488 00:10:12.663 }, 00:10:12.663 { 00:10:12.663 "name": null, 00:10:12.663 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:12.663 "is_configured": false, 00:10:12.663 "data_offset": 2048, 00:10:12.663 "data_size": 63488 00:10:12.663 } 00:10:12.663 ] 00:10:12.663 }' 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.663 01:11:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.235 [2024-10-15 01:11:25.743819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:13.235 [2024-10-15 01:11:25.743938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.235 [2024-10-15 01:11:25.743973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:13.235 [2024-10-15 01:11:25.744008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.235 [2024-10-15 01:11:25.744489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.235 [2024-10-15 01:11:25.744561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:13.235 [2024-10-15 01:11:25.744676] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:13.235 [2024-10-15 01:11:25.744733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:13.235 pt2 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.235 [2024-10-15 01:11:25.755731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:13.235 [2024-10-15 01:11:25.755819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.235 [2024-10-15 01:11:25.755853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:13.235 [2024-10-15 01:11:25.755887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.235 [2024-10-15 01:11:25.756286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.235 [2024-10-15 01:11:25.756347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:13.235 [2024-10-15 01:11:25.756441] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:13.235 [2024-10-15 01:11:25.756494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:13.235 pt3 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.235 [2024-10-15 01:11:25.767737] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:13.235 [2024-10-15 01:11:25.767783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.235 [2024-10-15 01:11:25.767796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:13.235 [2024-10-15 01:11:25.767805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.235 [2024-10-15 01:11:25.768094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.235 [2024-10-15 01:11:25.768113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:13.235 [2024-10-15 01:11:25.768165] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:13.235 [2024-10-15 01:11:25.768204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:13.235 [2024-10-15 01:11:25.768306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:13.235 [2024-10-15 01:11:25.768317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:13.235 [2024-10-15 01:11:25.768553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:13.235 [2024-10-15 01:11:25.768697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:13.235 [2024-10-15 01:11:25.768706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:13.235 [2024-10-15 01:11:25.768811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.235 pt4 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.235 "name": "raid_bdev1", 00:10:13.235 "uuid": "67ee4c2b-284e-4174-9477-0e2f1b9d8234", 00:10:13.235 "strip_size_kb": 64, 00:10:13.235 "state": "online", 00:10:13.235 "raid_level": "concat", 00:10:13.235 
"superblock": true, 00:10:13.235 "num_base_bdevs": 4, 00:10:13.235 "num_base_bdevs_discovered": 4, 00:10:13.235 "num_base_bdevs_operational": 4, 00:10:13.235 "base_bdevs_list": [ 00:10:13.235 { 00:10:13.235 "name": "pt1", 00:10:13.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.235 "is_configured": true, 00:10:13.235 "data_offset": 2048, 00:10:13.235 "data_size": 63488 00:10:13.235 }, 00:10:13.235 { 00:10:13.235 "name": "pt2", 00:10:13.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.235 "is_configured": true, 00:10:13.235 "data_offset": 2048, 00:10:13.235 "data_size": 63488 00:10:13.235 }, 00:10:13.235 { 00:10:13.235 "name": "pt3", 00:10:13.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.235 "is_configured": true, 00:10:13.235 "data_offset": 2048, 00:10:13.235 "data_size": 63488 00:10:13.235 }, 00:10:13.235 { 00:10:13.235 "name": "pt4", 00:10:13.235 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:13.235 "is_configured": true, 00:10:13.235 "data_offset": 2048, 00:10:13.235 "data_size": 63488 00:10:13.235 } 00:10:13.235 ] 00:10:13.235 }' 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.235 01:11:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.495 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:13.495 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:13.495 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.495 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.495 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.495 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.495 01:11:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.495 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.495 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.756 [2024-10-15 01:11:26.227353] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.756 "name": "raid_bdev1", 00:10:13.756 "aliases": [ 00:10:13.756 "67ee4c2b-284e-4174-9477-0e2f1b9d8234" 00:10:13.756 ], 00:10:13.756 "product_name": "Raid Volume", 00:10:13.756 "block_size": 512, 00:10:13.756 "num_blocks": 253952, 00:10:13.756 "uuid": "67ee4c2b-284e-4174-9477-0e2f1b9d8234", 00:10:13.756 "assigned_rate_limits": { 00:10:13.756 "rw_ios_per_sec": 0, 00:10:13.756 "rw_mbytes_per_sec": 0, 00:10:13.756 "r_mbytes_per_sec": 0, 00:10:13.756 "w_mbytes_per_sec": 0 00:10:13.756 }, 00:10:13.756 "claimed": false, 00:10:13.756 "zoned": false, 00:10:13.756 "supported_io_types": { 00:10:13.756 "read": true, 00:10:13.756 "write": true, 00:10:13.756 "unmap": true, 00:10:13.756 "flush": true, 00:10:13.756 "reset": true, 00:10:13.756 "nvme_admin": false, 00:10:13.756 "nvme_io": false, 00:10:13.756 "nvme_io_md": false, 00:10:13.756 "write_zeroes": true, 00:10:13.756 "zcopy": false, 00:10:13.756 "get_zone_info": false, 00:10:13.756 "zone_management": false, 00:10:13.756 "zone_append": false, 00:10:13.756 "compare": false, 00:10:13.756 "compare_and_write": false, 00:10:13.756 "abort": false, 00:10:13.756 "seek_hole": false, 00:10:13.756 "seek_data": false, 00:10:13.756 "copy": false, 00:10:13.756 "nvme_iov_md": false 00:10:13.756 }, 00:10:13.756 
"memory_domains": [ 00:10:13.756 { 00:10:13.756 "dma_device_id": "system", 00:10:13.756 "dma_device_type": 1 00:10:13.756 }, 00:10:13.756 { 00:10:13.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.756 "dma_device_type": 2 00:10:13.756 }, 00:10:13.756 { 00:10:13.756 "dma_device_id": "system", 00:10:13.756 "dma_device_type": 1 00:10:13.756 }, 00:10:13.756 { 00:10:13.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.756 "dma_device_type": 2 00:10:13.756 }, 00:10:13.756 { 00:10:13.756 "dma_device_id": "system", 00:10:13.756 "dma_device_type": 1 00:10:13.756 }, 00:10:13.756 { 00:10:13.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.756 "dma_device_type": 2 00:10:13.756 }, 00:10:13.756 { 00:10:13.756 "dma_device_id": "system", 00:10:13.756 "dma_device_type": 1 00:10:13.756 }, 00:10:13.756 { 00:10:13.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.756 "dma_device_type": 2 00:10:13.756 } 00:10:13.756 ], 00:10:13.756 "driver_specific": { 00:10:13.756 "raid": { 00:10:13.756 "uuid": "67ee4c2b-284e-4174-9477-0e2f1b9d8234", 00:10:13.756 "strip_size_kb": 64, 00:10:13.756 "state": "online", 00:10:13.756 "raid_level": "concat", 00:10:13.756 "superblock": true, 00:10:13.756 "num_base_bdevs": 4, 00:10:13.756 "num_base_bdevs_discovered": 4, 00:10:13.756 "num_base_bdevs_operational": 4, 00:10:13.756 "base_bdevs_list": [ 00:10:13.756 { 00:10:13.756 "name": "pt1", 00:10:13.756 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.756 "is_configured": true, 00:10:13.756 "data_offset": 2048, 00:10:13.756 "data_size": 63488 00:10:13.756 }, 00:10:13.756 { 00:10:13.756 "name": "pt2", 00:10:13.756 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.756 "is_configured": true, 00:10:13.756 "data_offset": 2048, 00:10:13.756 "data_size": 63488 00:10:13.756 }, 00:10:13.756 { 00:10:13.756 "name": "pt3", 00:10:13.756 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.756 "is_configured": true, 00:10:13.756 "data_offset": 2048, 00:10:13.756 "data_size": 63488 
00:10:13.756 }, 00:10:13.756 { 00:10:13.756 "name": "pt4", 00:10:13.756 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:13.756 "is_configured": true, 00:10:13.756 "data_offset": 2048, 00:10:13.756 "data_size": 63488 00:10:13.756 } 00:10:13.756 ] 00:10:13.756 } 00:10:13.756 } 00:10:13.756 }' 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:13.756 pt2 00:10:13.756 pt3 00:10:13.756 pt4' 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.756 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.016 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.016 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.016 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.016 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.016 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.016 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:10:14.016 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.016 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.016 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.016 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.016 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.017 [2024-10-15 01:11:26.550811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 67ee4c2b-284e-4174-9477-0e2f1b9d8234 '!=' 67ee4c2b-284e-4174-9477-0e2f1b9d8234 ']' 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83203 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83203 ']' 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83203 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@955 -- # uname 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83203 00:10:14.017 killing process with pid 83203 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83203' 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83203 00:10:14.017 [2024-10-15 01:11:26.637354] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.017 [2024-10-15 01:11:26.637475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.017 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83203 00:10:14.017 [2024-10-15 01:11:26.637544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.017 [2024-10-15 01:11:26.637556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:10:14.017 [2024-10-15 01:11:26.681871] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.277 01:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:14.277 00:10:14.277 real 0m4.060s 00:10:14.277 user 0m6.407s 00:10:14.277 sys 0m0.909s 00:10:14.277 01:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.277 ************************************ 00:10:14.277 END TEST raid_superblock_test 00:10:14.277 ************************************ 00:10:14.277 01:11:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.277 01:11:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:14.277 01:11:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:14.277 01:11:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.277 01:11:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.277 ************************************ 00:10:14.277 START TEST raid_read_error_test 00:10:14.277 ************************************ 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.r9DEuqo3pn 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83451 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83451 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83451 ']' 00:10:14.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.277 01:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.537 [2024-10-15 01:11:27.063112] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:10:14.537 [2024-10-15 01:11:27.063269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83451 ] 00:10:14.537 [2024-10-15 01:11:27.207846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.537 [2024-10-15 01:11:27.234270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.797 [2024-10-15 01:11:27.277547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.797 [2024-10-15 01:11:27.277583] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.367 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.367 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:15.367 01:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.367 01:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:15.367 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.368 BaseBdev1_malloc 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.368 true 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.368 [2024-10-15 01:11:27.932814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:15.368 [2024-10-15 01:11:27.932868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.368 [2024-10-15 01:11:27.932890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:15.368 [2024-10-15 01:11:27.932899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.368 [2024-10-15 01:11:27.935151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.368 [2024-10-15 01:11:27.935199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:15.368 BaseBdev1 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.368 BaseBdev2_malloc 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.368 true 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.368 [2024-10-15 01:11:27.973715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:15.368 [2024-10-15 01:11:27.973766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.368 [2024-10-15 01:11:27.973800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:15.368 [2024-10-15 01:11:27.973818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.368 [2024-10-15 01:11:27.976120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.368 [2024-10-15 01:11:27.976224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:15.368 BaseBdev2 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.368 BaseBdev3_malloc 00:10:15.368 01:11:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.368 01:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.368 true 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.368 [2024-10-15 01:11:28.014548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:15.368 [2024-10-15 01:11:28.014651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.368 [2024-10-15 01:11:28.014681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:15.368 [2024-10-15 01:11:28.014691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.368 [2024-10-15 01:11:28.017015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.368 [2024-10-15 01:11:28.017051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:15.368 BaseBdev3 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.368 BaseBdev4_malloc 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.368 true 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.368 [2024-10-15 01:11:28.066674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:15.368 [2024-10-15 01:11:28.066762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.368 [2024-10-15 01:11:28.066806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:15.368 [2024-10-15 01:11:28.066815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.368 [2024-10-15 01:11:28.069145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.368 [2024-10-15 01:11:28.069195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:15.368 BaseBdev4 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.368 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.368 [2024-10-15 01:11:28.078719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.368 [2024-10-15 01:11:28.080626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.368 [2024-10-15 01:11:28.080711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.368 [2024-10-15 01:11:28.080781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:15.368 [2024-10-15 01:11:28.081000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:15.368 [2024-10-15 01:11:28.081012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:15.369 [2024-10-15 01:11:28.081297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:15.369 [2024-10-15 01:11:28.081432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:15.369 [2024-10-15 01:11:28.081451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:15.369 [2024-10-15 01:11:28.081579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.369 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.369 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:15.369 01:11:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.369 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.369 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.369 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.369 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.369 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.369 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.369 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.369 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.628 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.628 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.628 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.628 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.628 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.628 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.628 "name": "raid_bdev1", 00:10:15.628 "uuid": "20f4e0cd-a1d1-4d95-805e-d1d0c7e27966", 00:10:15.628 "strip_size_kb": 64, 00:10:15.628 "state": "online", 00:10:15.628 "raid_level": "concat", 00:10:15.628 "superblock": true, 00:10:15.628 "num_base_bdevs": 4, 00:10:15.628 "num_base_bdevs_discovered": 4, 00:10:15.628 "num_base_bdevs_operational": 4, 00:10:15.628 "base_bdevs_list": [ 
00:10:15.628 { 00:10:15.628 "name": "BaseBdev1", 00:10:15.628 "uuid": "4e277422-a8e0-5b13-9260-65bc5a282219", 00:10:15.628 "is_configured": true, 00:10:15.628 "data_offset": 2048, 00:10:15.628 "data_size": 63488 00:10:15.628 }, 00:10:15.628 { 00:10:15.628 "name": "BaseBdev2", 00:10:15.628 "uuid": "61a08c06-b5f9-523f-a38f-c668db832c93", 00:10:15.628 "is_configured": true, 00:10:15.628 "data_offset": 2048, 00:10:15.628 "data_size": 63488 00:10:15.628 }, 00:10:15.628 { 00:10:15.628 "name": "BaseBdev3", 00:10:15.628 "uuid": "87ddb37f-5ae1-5a3f-a7c6-0b9059026dee", 00:10:15.628 "is_configured": true, 00:10:15.628 "data_offset": 2048, 00:10:15.628 "data_size": 63488 00:10:15.628 }, 00:10:15.628 { 00:10:15.628 "name": "BaseBdev4", 00:10:15.628 "uuid": "500ff731-013b-54e3-8dd5-5437d101f128", 00:10:15.628 "is_configured": true, 00:10:15.628 "data_offset": 2048, 00:10:15.628 "data_size": 63488 00:10:15.628 } 00:10:15.628 ] 00:10:15.628 }' 00:10:15.628 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.628 01:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.888 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:15.888 01:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:15.888 [2024-10-15 01:11:28.606227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.827 01:11:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.827 01:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.087 01:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.087 01:11:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.087 "name": "raid_bdev1", 00:10:17.087 "uuid": "20f4e0cd-a1d1-4d95-805e-d1d0c7e27966", 00:10:17.087 "strip_size_kb": 64, 00:10:17.087 "state": "online", 00:10:17.087 "raid_level": "concat", 00:10:17.087 "superblock": true, 00:10:17.087 "num_base_bdevs": 4, 00:10:17.087 "num_base_bdevs_discovered": 4, 00:10:17.087 "num_base_bdevs_operational": 4, 00:10:17.087 "base_bdevs_list": [ 00:10:17.087 { 00:10:17.087 "name": "BaseBdev1", 00:10:17.087 "uuid": "4e277422-a8e0-5b13-9260-65bc5a282219", 00:10:17.087 "is_configured": true, 00:10:17.087 "data_offset": 2048, 00:10:17.087 "data_size": 63488 00:10:17.087 }, 00:10:17.087 { 00:10:17.087 "name": "BaseBdev2", 00:10:17.087 "uuid": "61a08c06-b5f9-523f-a38f-c668db832c93", 00:10:17.087 "is_configured": true, 00:10:17.087 "data_offset": 2048, 00:10:17.087 "data_size": 63488 00:10:17.087 }, 00:10:17.087 { 00:10:17.087 "name": "BaseBdev3", 00:10:17.087 "uuid": "87ddb37f-5ae1-5a3f-a7c6-0b9059026dee", 00:10:17.087 "is_configured": true, 00:10:17.087 "data_offset": 2048, 00:10:17.087 "data_size": 63488 00:10:17.087 }, 00:10:17.087 { 00:10:17.087 "name": "BaseBdev4", 00:10:17.087 "uuid": "500ff731-013b-54e3-8dd5-5437d101f128", 00:10:17.087 "is_configured": true, 00:10:17.087 "data_offset": 2048, 00:10:17.087 "data_size": 63488 00:10:17.087 } 00:10:17.087 ] 00:10:17.087 }' 00:10:17.087 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.087 01:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.354 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:17.354 01:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.354 01:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.354 [2024-10-15 01:11:29.976818] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.354 [2024-10-15 01:11:29.976901] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.354 [2024-10-15 01:11:29.979497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.354 [2024-10-15 01:11:29.979567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.354 [2024-10-15 01:11:29.979625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.354 [2024-10-15 01:11:29.979639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:17.354 { 00:10:17.354 "results": [ 00:10:17.354 { 00:10:17.354 "job": "raid_bdev1", 00:10:17.354 "core_mask": "0x1", 00:10:17.354 "workload": "randrw", 00:10:17.354 "percentage": 50, 00:10:17.354 "status": "finished", 00:10:17.354 "queue_depth": 1, 00:10:17.354 "io_size": 131072, 00:10:17.354 "runtime": 1.37127, 00:10:17.354 "iops": 16511.700832075374, 00:10:17.354 "mibps": 2063.9626040094217, 00:10:17.354 "io_failed": 1, 00:10:17.354 "io_timeout": 0, 00:10:17.354 "avg_latency_us": 83.94734985623636, 00:10:17.354 "min_latency_us": 26.047161572052403, 00:10:17.354 "max_latency_us": 1459.5353711790392 00:10:17.354 } 00:10:17.354 ], 00:10:17.354 "core_count": 1 00:10:17.354 } 00:10:17.354 01:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.354 01:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83451 00:10:17.354 01:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83451 ']' 00:10:17.354 01:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83451 00:10:17.354 01:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:17.354 01:11:29 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:17.354 01:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83451 00:10:17.354 killing process with pid 83451 00:10:17.354 01:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:17.354 01:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:17.354 01:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83451' 00:10:17.354 01:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83451 00:10:17.354 [2024-10-15 01:11:30.026287] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.354 01:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83451 00:10:17.354 [2024-10-15 01:11:30.061105] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:17.613 01:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.r9DEuqo3pn 00:10:17.613 01:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:17.613 01:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:17.613 01:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:17.613 01:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:17.613 01:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:17.613 01:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:17.613 ************************************ 00:10:17.613 END TEST raid_read_error_test 00:10:17.613 ************************************ 00:10:17.613 01:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:17.613 00:10:17.613 real 0m3.310s 
00:10:17.613 user 0m4.196s 00:10:17.613 sys 0m0.539s 00:10:17.613 01:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.613 01:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.613 01:11:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:17.613 01:11:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:17.613 01:11:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.613 01:11:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:17.873 ************************************ 00:10:17.873 START TEST raid_write_error_test 00:10:17.873 ************************************ 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LVMxZZCQrM 00:10:17.873 01:11:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83580 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83580 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 83580 ']' 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:17.873 01:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.873 [2024-10-15 01:11:30.449717] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:10:17.873 [2024-10-15 01:11:30.449942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83580 ] 00:10:17.873 [2024-10-15 01:11:30.592819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.133 [2024-10-15 01:11:30.620412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.133 [2024-10-15 01:11:30.663829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.133 [2024-10-15 01:11:30.663945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.702 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:18.702 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:18.702 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:18.702 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:18.702 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.703 BaseBdev1_malloc 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.703 true 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.703 [2024-10-15 01:11:31.323098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:18.703 [2024-10-15 01:11:31.323158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.703 [2024-10-15 01:11:31.323210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:18.703 [2024-10-15 01:11:31.323220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.703 [2024-10-15 01:11:31.325545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.703 [2024-10-15 01:11:31.325584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:18.703 BaseBdev1 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.703 BaseBdev2_malloc 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:18.703 01:11:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.703 true 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.703 [2024-10-15 01:11:31.364088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:18.703 [2024-10-15 01:11:31.364140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.703 [2024-10-15 01:11:31.364176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:18.703 [2024-10-15 01:11:31.364212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.703 [2024-10-15 01:11:31.366513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.703 [2024-10-15 01:11:31.366589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:18.703 BaseBdev2 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:18.703 BaseBdev3_malloc 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.703 true 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.703 [2024-10-15 01:11:31.404927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:18.703 [2024-10-15 01:11:31.404980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.703 [2024-10-15 01:11:31.405003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:18.703 [2024-10-15 01:11:31.405012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.703 [2024-10-15 01:11:31.407198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.703 [2024-10-15 01:11:31.407232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:18.703 BaseBdev3 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.703 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.963 BaseBdev4_malloc 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.963 true 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.963 [2024-10-15 01:11:31.454856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:18.963 [2024-10-15 01:11:31.454946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.963 [2024-10-15 01:11:31.454972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:18.963 [2024-10-15 01:11:31.454982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.963 [2024-10-15 01:11:31.457307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.963 [2024-10-15 01:11:31.457379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:18.963 BaseBdev4 
00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.963 [2024-10-15 01:11:31.466893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.963 [2024-10-15 01:11:31.468875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.963 [2024-10-15 01:11:31.468950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.963 [2024-10-15 01:11:31.469015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:18.963 [2024-10-15 01:11:31.469227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:18.963 [2024-10-15 01:11:31.469239] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:18.963 [2024-10-15 01:11:31.469500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:18.963 [2024-10-15 01:11:31.469632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:18.963 [2024-10-15 01:11:31.469644] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:18.963 [2024-10-15 01:11:31.469766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.963 "name": "raid_bdev1", 00:10:18.963 "uuid": "f2693a05-14b1-4f8a-a4f9-cd750fd723b6", 00:10:18.963 "strip_size_kb": 64, 00:10:18.963 "state": "online", 00:10:18.963 "raid_level": "concat", 00:10:18.963 "superblock": true, 00:10:18.963 "num_base_bdevs": 4, 00:10:18.963 "num_base_bdevs_discovered": 4, 00:10:18.963 
"num_base_bdevs_operational": 4, 00:10:18.963 "base_bdevs_list": [ 00:10:18.963 { 00:10:18.963 "name": "BaseBdev1", 00:10:18.963 "uuid": "1ed00d60-1bd4-5740-8036-0c282f4b1c5e", 00:10:18.963 "is_configured": true, 00:10:18.963 "data_offset": 2048, 00:10:18.963 "data_size": 63488 00:10:18.963 }, 00:10:18.963 { 00:10:18.963 "name": "BaseBdev2", 00:10:18.963 "uuid": "3fadd23d-bcf3-5f6c-8d51-d3a107d11c08", 00:10:18.963 "is_configured": true, 00:10:18.963 "data_offset": 2048, 00:10:18.963 "data_size": 63488 00:10:18.963 }, 00:10:18.963 { 00:10:18.963 "name": "BaseBdev3", 00:10:18.963 "uuid": "e1ba341f-8931-581d-b727-7d3d10878cc9", 00:10:18.963 "is_configured": true, 00:10:18.963 "data_offset": 2048, 00:10:18.963 "data_size": 63488 00:10:18.963 }, 00:10:18.963 { 00:10:18.963 "name": "BaseBdev4", 00:10:18.963 "uuid": "8e2e49af-c943-5ba1-9c85-660f0ab2e6f9", 00:10:18.963 "is_configured": true, 00:10:18.963 "data_offset": 2048, 00:10:18.963 "data_size": 63488 00:10:18.963 } 00:10:18.963 ] 00:10:18.963 }' 00:10:18.963 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.964 01:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.222 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:19.222 01:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:19.481 [2024-10-15 01:11:32.014385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.421 01:11:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.421 "name": "raid_bdev1", 00:10:20.421 "uuid": "f2693a05-14b1-4f8a-a4f9-cd750fd723b6", 00:10:20.421 "strip_size_kb": 64, 00:10:20.421 "state": "online", 00:10:20.421 "raid_level": "concat", 00:10:20.421 "superblock": true, 00:10:20.421 "num_base_bdevs": 4, 00:10:20.421 "num_base_bdevs_discovered": 4, 00:10:20.421 "num_base_bdevs_operational": 4, 00:10:20.421 "base_bdevs_list": [ 00:10:20.421 { 00:10:20.421 "name": "BaseBdev1", 00:10:20.421 "uuid": "1ed00d60-1bd4-5740-8036-0c282f4b1c5e", 00:10:20.421 "is_configured": true, 00:10:20.421 "data_offset": 2048, 00:10:20.421 "data_size": 63488 00:10:20.421 }, 00:10:20.421 { 00:10:20.421 "name": "BaseBdev2", 00:10:20.421 "uuid": "3fadd23d-bcf3-5f6c-8d51-d3a107d11c08", 00:10:20.421 "is_configured": true, 00:10:20.421 "data_offset": 2048, 00:10:20.421 "data_size": 63488 00:10:20.421 }, 00:10:20.421 { 00:10:20.421 "name": "BaseBdev3", 00:10:20.421 "uuid": "e1ba341f-8931-581d-b727-7d3d10878cc9", 00:10:20.421 "is_configured": true, 00:10:20.421 "data_offset": 2048, 00:10:20.421 "data_size": 63488 00:10:20.421 }, 00:10:20.421 { 00:10:20.421 "name": "BaseBdev4", 00:10:20.421 "uuid": "8e2e49af-c943-5ba1-9c85-660f0ab2e6f9", 00:10:20.421 "is_configured": true, 00:10:20.421 "data_offset": 2048, 00:10:20.421 "data_size": 63488 00:10:20.421 } 00:10:20.421 ] 00:10:20.421 }' 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.421 01:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.681 01:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:20.681 01:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.681 01:11:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.681 [2024-10-15 01:11:33.374661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.681 [2024-10-15 01:11:33.374761] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.681 [2024-10-15 01:11:33.377574] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.681 [2024-10-15 01:11:33.377696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.681 [2024-10-15 01:11:33.377777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.681 [2024-10-15 01:11:33.377833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:20.681 { 00:10:20.681 "results": [ 00:10:20.681 { 00:10:20.681 "job": "raid_bdev1", 00:10:20.681 "core_mask": "0x1", 00:10:20.681 "workload": "randrw", 00:10:20.681 "percentage": 50, 00:10:20.681 "status": "finished", 00:10:20.681 "queue_depth": 1, 00:10:20.681 "io_size": 131072, 00:10:20.681 "runtime": 1.360787, 00:10:20.681 "iops": 15861.409610762008, 00:10:20.681 "mibps": 1982.676201345251, 00:10:20.681 "io_failed": 1, 00:10:20.681 "io_timeout": 0, 00:10:20.681 "avg_latency_us": 87.3647411219784, 00:10:20.681 "min_latency_us": 26.717903930131005, 00:10:20.681 "max_latency_us": 1609.7816593886462 00:10:20.681 } 00:10:20.681 ], 00:10:20.681 "core_count": 1 00:10:20.681 } 00:10:20.681 01:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.681 01:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83580 00:10:20.681 01:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 83580 ']' 00:10:20.681 01:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 83580 00:10:20.681 01:11:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:20.681 01:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:20.681 01:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83580 00:10:20.940 01:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:20.940 01:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:20.940 01:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83580' 00:10:20.940 killing process with pid 83580 00:10:20.940 01:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 83580 00:10:20.940 [2024-10-15 01:11:33.425936] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.940 01:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 83580 00:10:20.940 [2024-10-15 01:11:33.462074] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.199 01:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LVMxZZCQrM 00:10:21.199 01:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:21.199 01:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:21.199 ************************************ 00:10:21.199 END TEST raid_write_error_test 00:10:21.199 ************************************ 00:10:21.199 01:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:21.200 01:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:21.200 01:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:21.200 01:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:21.200 01:11:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:21.200 00:10:21.200 real 0m3.331s 00:10:21.200 user 0m4.238s 00:10:21.200 sys 0m0.535s 00:10:21.200 01:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.200 01:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.200 01:11:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:21.200 01:11:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:21.200 01:11:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:21.200 01:11:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.200 01:11:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.200 ************************************ 00:10:21.200 START TEST raid_state_function_test 00:10:21.200 ************************************ 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:21.200 01:11:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83707 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83707' 00:10:21.200 Process raid pid: 83707 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83707 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83707 ']' 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.200 01:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.200 [2024-10-15 01:11:33.838318] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:10:21.200 [2024-10-15 01:11:33.838536] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.459 [2024-10-15 01:11:33.981860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.459 [2024-10-15 01:11:34.009398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.459 [2024-10-15 01:11:34.052979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.459 [2024-10-15 01:11:34.053097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.029 [2024-10-15 01:11:34.687620] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.029 [2024-10-15 01:11:34.687682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.029 [2024-10-15 01:11:34.687695] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.029 [2024-10-15 01:11:34.687706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.029 [2024-10-15 01:11:34.687712] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:22.029 [2024-10-15 01:11:34.687724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:22.029 [2024-10-15 01:11:34.687730] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:22.029 [2024-10-15 01:11:34.687739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.029 "name": "Existed_Raid", 00:10:22.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.029 "strip_size_kb": 0, 00:10:22.029 "state": "configuring", 00:10:22.029 "raid_level": "raid1", 00:10:22.029 "superblock": false, 00:10:22.029 "num_base_bdevs": 4, 00:10:22.029 "num_base_bdevs_discovered": 0, 00:10:22.029 "num_base_bdevs_operational": 4, 00:10:22.029 "base_bdevs_list": [ 00:10:22.029 { 00:10:22.029 "name": "BaseBdev1", 00:10:22.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.029 "is_configured": false, 00:10:22.029 "data_offset": 0, 00:10:22.029 "data_size": 0 00:10:22.029 }, 00:10:22.029 { 00:10:22.029 "name": "BaseBdev2", 00:10:22.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.029 "is_configured": false, 00:10:22.029 "data_offset": 0, 00:10:22.029 "data_size": 0 00:10:22.029 }, 00:10:22.029 { 00:10:22.029 "name": "BaseBdev3", 00:10:22.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.029 "is_configured": false, 00:10:22.029 "data_offset": 0, 00:10:22.029 "data_size": 0 00:10:22.029 }, 00:10:22.029 { 00:10:22.029 "name": "BaseBdev4", 00:10:22.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.029 "is_configured": false, 00:10:22.029 "data_offset": 0, 00:10:22.029 "data_size": 0 00:10:22.029 } 00:10:22.029 ] 00:10:22.029 }' 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.029 01:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.598 [2024-10-15 01:11:35.082868] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.598 [2024-10-15 01:11:35.082984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.598 [2024-10-15 01:11:35.090887] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.598 [2024-10-15 01:11:35.090975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.598 [2024-10-15 01:11:35.091011] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.598 [2024-10-15 01:11:35.091038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.598 [2024-10-15 01:11:35.091082] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:22.598 [2024-10-15 01:11:35.091108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:22.598 [2024-10-15 01:11:35.091135] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:22.598 [2024-10-15 01:11:35.091161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.598 [2024-10-15 01:11:35.108333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.598 BaseBdev1 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.598 [ 00:10:22.598 { 00:10:22.598 "name": "BaseBdev1", 00:10:22.598 "aliases": [ 00:10:22.598 "eb228512-6a5b-49ee-9619-510fd4b91b1f" 00:10:22.598 ], 00:10:22.598 "product_name": "Malloc disk", 00:10:22.598 "block_size": 512, 00:10:22.598 "num_blocks": 65536, 00:10:22.598 "uuid": "eb228512-6a5b-49ee-9619-510fd4b91b1f", 00:10:22.598 "assigned_rate_limits": { 00:10:22.598 "rw_ios_per_sec": 0, 00:10:22.598 "rw_mbytes_per_sec": 0, 00:10:22.598 "r_mbytes_per_sec": 0, 00:10:22.598 "w_mbytes_per_sec": 0 00:10:22.598 }, 00:10:22.598 "claimed": true, 00:10:22.598 "claim_type": "exclusive_write", 00:10:22.598 "zoned": false, 00:10:22.598 "supported_io_types": { 00:10:22.598 "read": true, 00:10:22.598 "write": true, 00:10:22.598 "unmap": true, 00:10:22.598 "flush": true, 00:10:22.598 "reset": true, 00:10:22.598 "nvme_admin": false, 00:10:22.598 "nvme_io": false, 00:10:22.598 "nvme_io_md": false, 00:10:22.598 "write_zeroes": true, 00:10:22.598 "zcopy": true, 00:10:22.598 "get_zone_info": false, 00:10:22.598 "zone_management": false, 00:10:22.598 "zone_append": false, 00:10:22.598 "compare": false, 00:10:22.598 "compare_and_write": false, 00:10:22.598 "abort": true, 00:10:22.598 "seek_hole": false, 00:10:22.598 "seek_data": false, 00:10:22.598 "copy": true, 00:10:22.598 "nvme_iov_md": false 00:10:22.598 }, 00:10:22.598 "memory_domains": [ 00:10:22.598 { 00:10:22.598 "dma_device_id": "system", 00:10:22.598 "dma_device_type": 1 00:10:22.598 }, 00:10:22.598 { 00:10:22.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.598 "dma_device_type": 2 00:10:22.598 } 00:10:22.598 ], 00:10:22.598 "driver_specific": {} 00:10:22.598 } 00:10:22.598 ] 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.598 "name": "Existed_Raid", 00:10:22.598 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:22.598 "strip_size_kb": 0, 00:10:22.598 "state": "configuring", 00:10:22.598 "raid_level": "raid1", 00:10:22.598 "superblock": false, 00:10:22.598 "num_base_bdevs": 4, 00:10:22.598 "num_base_bdevs_discovered": 1, 00:10:22.598 "num_base_bdevs_operational": 4, 00:10:22.598 "base_bdevs_list": [ 00:10:22.598 { 00:10:22.598 "name": "BaseBdev1", 00:10:22.598 "uuid": "eb228512-6a5b-49ee-9619-510fd4b91b1f", 00:10:22.598 "is_configured": true, 00:10:22.598 "data_offset": 0, 00:10:22.598 "data_size": 65536 00:10:22.598 }, 00:10:22.598 { 00:10:22.598 "name": "BaseBdev2", 00:10:22.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.598 "is_configured": false, 00:10:22.598 "data_offset": 0, 00:10:22.598 "data_size": 0 00:10:22.598 }, 00:10:22.598 { 00:10:22.598 "name": "BaseBdev3", 00:10:22.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.598 "is_configured": false, 00:10:22.598 "data_offset": 0, 00:10:22.598 "data_size": 0 00:10:22.598 }, 00:10:22.598 { 00:10:22.598 "name": "BaseBdev4", 00:10:22.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.598 "is_configured": false, 00:10:22.598 "data_offset": 0, 00:10:22.598 "data_size": 0 00:10:22.598 } 00:10:22.598 ] 00:10:22.598 }' 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.598 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.857 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:22.857 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.857 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.857 [2024-10-15 01:11:35.579622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:23.117 [2024-10-15 01:11:35.579770] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.117 [2024-10-15 01:11:35.591642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.117 [2024-10-15 01:11:35.593733] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.117 [2024-10-15 01:11:35.593810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.117 [2024-10-15 01:11:35.593839] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:23.117 [2024-10-15 01:11:35.593863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:23.117 [2024-10-15 01:11:35.593882] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:23.117 [2024-10-15 01:11:35.593902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:23.117 01:11:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.117 "name": "Existed_Raid", 00:10:23.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.117 "strip_size_kb": 0, 00:10:23.117 "state": "configuring", 00:10:23.117 "raid_level": "raid1", 00:10:23.117 "superblock": false, 00:10:23.117 "num_base_bdevs": 4, 00:10:23.117 "num_base_bdevs_discovered": 1, 00:10:23.117 
"num_base_bdevs_operational": 4, 00:10:23.117 "base_bdevs_list": [ 00:10:23.117 { 00:10:23.117 "name": "BaseBdev1", 00:10:23.117 "uuid": "eb228512-6a5b-49ee-9619-510fd4b91b1f", 00:10:23.117 "is_configured": true, 00:10:23.117 "data_offset": 0, 00:10:23.117 "data_size": 65536 00:10:23.117 }, 00:10:23.117 { 00:10:23.117 "name": "BaseBdev2", 00:10:23.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.117 "is_configured": false, 00:10:23.117 "data_offset": 0, 00:10:23.117 "data_size": 0 00:10:23.117 }, 00:10:23.117 { 00:10:23.117 "name": "BaseBdev3", 00:10:23.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.117 "is_configured": false, 00:10:23.117 "data_offset": 0, 00:10:23.117 "data_size": 0 00:10:23.117 }, 00:10:23.117 { 00:10:23.117 "name": "BaseBdev4", 00:10:23.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.117 "is_configured": false, 00:10:23.117 "data_offset": 0, 00:10:23.117 "data_size": 0 00:10:23.117 } 00:10:23.117 ] 00:10:23.117 }' 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.117 01:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.377 [2024-10-15 01:11:36.057962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:23.377 BaseBdev2 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev2 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.377 [ 00:10:23.377 { 00:10:23.377 "name": "BaseBdev2", 00:10:23.377 "aliases": [ 00:10:23.377 "2505b938-2f69-4d0e-9561-62cc831529e1" 00:10:23.377 ], 00:10:23.377 "product_name": "Malloc disk", 00:10:23.377 "block_size": 512, 00:10:23.377 "num_blocks": 65536, 00:10:23.377 "uuid": "2505b938-2f69-4d0e-9561-62cc831529e1", 00:10:23.377 "assigned_rate_limits": { 00:10:23.377 "rw_ios_per_sec": 0, 00:10:23.377 "rw_mbytes_per_sec": 0, 00:10:23.377 "r_mbytes_per_sec": 0, 00:10:23.377 "w_mbytes_per_sec": 0 00:10:23.377 }, 00:10:23.377 "claimed": true, 00:10:23.377 "claim_type": "exclusive_write", 00:10:23.377 "zoned": false, 00:10:23.377 "supported_io_types": { 00:10:23.377 "read": true, 00:10:23.377 "write": true, 00:10:23.377 
"unmap": true, 00:10:23.377 "flush": true, 00:10:23.377 "reset": true, 00:10:23.377 "nvme_admin": false, 00:10:23.377 "nvme_io": false, 00:10:23.377 "nvme_io_md": false, 00:10:23.377 "write_zeroes": true, 00:10:23.377 "zcopy": true, 00:10:23.377 "get_zone_info": false, 00:10:23.377 "zone_management": false, 00:10:23.377 "zone_append": false, 00:10:23.377 "compare": false, 00:10:23.377 "compare_and_write": false, 00:10:23.377 "abort": true, 00:10:23.377 "seek_hole": false, 00:10:23.377 "seek_data": false, 00:10:23.377 "copy": true, 00:10:23.377 "nvme_iov_md": false 00:10:23.377 }, 00:10:23.377 "memory_domains": [ 00:10:23.377 { 00:10:23.377 "dma_device_id": "system", 00:10:23.377 "dma_device_type": 1 00:10:23.377 }, 00:10:23.377 { 00:10:23.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.377 "dma_device_type": 2 00:10:23.377 } 00:10:23.377 ], 00:10:23.377 "driver_specific": {} 00:10:23.377 } 00:10:23.377 ] 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.377 01:11:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.377 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.637 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.637 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.637 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.637 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.637 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.637 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.637 "name": "Existed_Raid", 00:10:23.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.637 "strip_size_kb": 0, 00:10:23.637 "state": "configuring", 00:10:23.637 "raid_level": "raid1", 00:10:23.637 "superblock": false, 00:10:23.637 "num_base_bdevs": 4, 00:10:23.637 "num_base_bdevs_discovered": 2, 00:10:23.637 "num_base_bdevs_operational": 4, 00:10:23.637 "base_bdevs_list": [ 00:10:23.637 { 00:10:23.637 "name": "BaseBdev1", 00:10:23.637 "uuid": "eb228512-6a5b-49ee-9619-510fd4b91b1f", 00:10:23.637 "is_configured": true, 00:10:23.637 "data_offset": 0, 00:10:23.637 "data_size": 65536 00:10:23.637 }, 00:10:23.637 { 00:10:23.637 "name": "BaseBdev2", 00:10:23.637 "uuid": "2505b938-2f69-4d0e-9561-62cc831529e1", 00:10:23.637 "is_configured": true, 00:10:23.637 
"data_offset": 0, 00:10:23.637 "data_size": 65536 00:10:23.637 }, 00:10:23.637 { 00:10:23.637 "name": "BaseBdev3", 00:10:23.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.637 "is_configured": false, 00:10:23.637 "data_offset": 0, 00:10:23.637 "data_size": 0 00:10:23.637 }, 00:10:23.637 { 00:10:23.637 "name": "BaseBdev4", 00:10:23.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.637 "is_configured": false, 00:10:23.637 "data_offset": 0, 00:10:23.637 "data_size": 0 00:10:23.637 } 00:10:23.637 ] 00:10:23.637 }' 00:10:23.637 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.637 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.897 [2024-10-15 01:11:36.496705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.897 BaseBdev3 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.897 [ 00:10:23.897 { 00:10:23.897 "name": "BaseBdev3", 00:10:23.897 "aliases": [ 00:10:23.897 "ea1d5c65-eb68-4226-8bef-8e26e0f562a2" 00:10:23.897 ], 00:10:23.897 "product_name": "Malloc disk", 00:10:23.897 "block_size": 512, 00:10:23.897 "num_blocks": 65536, 00:10:23.897 "uuid": "ea1d5c65-eb68-4226-8bef-8e26e0f562a2", 00:10:23.897 "assigned_rate_limits": { 00:10:23.897 "rw_ios_per_sec": 0, 00:10:23.897 "rw_mbytes_per_sec": 0, 00:10:23.897 "r_mbytes_per_sec": 0, 00:10:23.897 "w_mbytes_per_sec": 0 00:10:23.897 }, 00:10:23.897 "claimed": true, 00:10:23.897 "claim_type": "exclusive_write", 00:10:23.897 "zoned": false, 00:10:23.897 "supported_io_types": { 00:10:23.897 "read": true, 00:10:23.897 "write": true, 00:10:23.897 "unmap": true, 00:10:23.897 "flush": true, 00:10:23.897 "reset": true, 00:10:23.897 "nvme_admin": false, 00:10:23.897 "nvme_io": false, 00:10:23.897 "nvme_io_md": false, 00:10:23.897 "write_zeroes": true, 00:10:23.897 "zcopy": true, 00:10:23.897 "get_zone_info": false, 00:10:23.897 "zone_management": false, 00:10:23.897 "zone_append": false, 00:10:23.897 "compare": false, 00:10:23.897 "compare_and_write": false, 00:10:23.897 "abort": true, 
00:10:23.897 "seek_hole": false, 00:10:23.897 "seek_data": false, 00:10:23.897 "copy": true, 00:10:23.897 "nvme_iov_md": false 00:10:23.897 }, 00:10:23.897 "memory_domains": [ 00:10:23.897 { 00:10:23.897 "dma_device_id": "system", 00:10:23.897 "dma_device_type": 1 00:10:23.897 }, 00:10:23.897 { 00:10:23.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.897 "dma_device_type": 2 00:10:23.897 } 00:10:23.897 ], 00:10:23.897 "driver_specific": {} 00:10:23.897 } 00:10:23.897 ] 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.897 01:11:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.897 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.898 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.898 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.898 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.898 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.898 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.898 "name": "Existed_Raid", 00:10:23.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.898 "strip_size_kb": 0, 00:10:23.898 "state": "configuring", 00:10:23.898 "raid_level": "raid1", 00:10:23.898 "superblock": false, 00:10:23.898 "num_base_bdevs": 4, 00:10:23.898 "num_base_bdevs_discovered": 3, 00:10:23.898 "num_base_bdevs_operational": 4, 00:10:23.898 "base_bdevs_list": [ 00:10:23.898 { 00:10:23.898 "name": "BaseBdev1", 00:10:23.898 "uuid": "eb228512-6a5b-49ee-9619-510fd4b91b1f", 00:10:23.898 "is_configured": true, 00:10:23.898 "data_offset": 0, 00:10:23.898 "data_size": 65536 00:10:23.898 }, 00:10:23.898 { 00:10:23.898 "name": "BaseBdev2", 00:10:23.898 "uuid": "2505b938-2f69-4d0e-9561-62cc831529e1", 00:10:23.898 "is_configured": true, 00:10:23.898 "data_offset": 0, 00:10:23.898 "data_size": 65536 00:10:23.898 }, 00:10:23.898 { 00:10:23.898 "name": "BaseBdev3", 00:10:23.898 "uuid": "ea1d5c65-eb68-4226-8bef-8e26e0f562a2", 00:10:23.898 "is_configured": true, 00:10:23.898 "data_offset": 0, 00:10:23.898 "data_size": 65536 00:10:23.898 }, 00:10:23.898 { 00:10:23.898 "name": "BaseBdev4", 00:10:23.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.898 "is_configured": false, 00:10:23.898 "data_offset": 
0, 00:10:23.898 "data_size": 0 00:10:23.898 } 00:10:23.898 ] 00:10:23.898 }' 00:10:23.898 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.898 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.467 [2024-10-15 01:11:36.967164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:24.467 [2024-10-15 01:11:36.967249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:24.467 [2024-10-15 01:11:36.967258] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:24.467 [2024-10-15 01:11:36.967588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:24.467 [2024-10-15 01:11:36.967756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:24.467 [2024-10-15 01:11:36.967771] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:24.467 [2024-10-15 01:11:36.967988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.467 BaseBdev4 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.467 01:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.467 [ 00:10:24.467 { 00:10:24.467 "name": "BaseBdev4", 00:10:24.467 "aliases": [ 00:10:24.467 "00b55764-7feb-4b61-9dd6-ddc83ac38ef0" 00:10:24.467 ], 00:10:24.467 "product_name": "Malloc disk", 00:10:24.467 "block_size": 512, 00:10:24.467 "num_blocks": 65536, 00:10:24.467 "uuid": "00b55764-7feb-4b61-9dd6-ddc83ac38ef0", 00:10:24.467 "assigned_rate_limits": { 00:10:24.467 "rw_ios_per_sec": 0, 00:10:24.467 "rw_mbytes_per_sec": 0, 00:10:24.467 "r_mbytes_per_sec": 0, 00:10:24.467 "w_mbytes_per_sec": 0 00:10:24.467 }, 00:10:24.467 "claimed": true, 00:10:24.467 "claim_type": "exclusive_write", 00:10:24.467 "zoned": false, 00:10:24.467 "supported_io_types": { 00:10:24.467 "read": true, 00:10:24.467 "write": true, 00:10:24.467 "unmap": true, 00:10:24.467 "flush": true, 00:10:24.467 "reset": true, 00:10:24.467 "nvme_admin": false, 00:10:24.467 "nvme_io": 
false, 00:10:24.467 "nvme_io_md": false, 00:10:24.467 "write_zeroes": true, 00:10:24.467 "zcopy": true, 00:10:24.467 "get_zone_info": false, 00:10:24.467 "zone_management": false, 00:10:24.467 "zone_append": false, 00:10:24.467 "compare": false, 00:10:24.467 "compare_and_write": false, 00:10:24.467 "abort": true, 00:10:24.467 "seek_hole": false, 00:10:24.467 "seek_data": false, 00:10:24.467 "copy": true, 00:10:24.467 "nvme_iov_md": false 00:10:24.467 }, 00:10:24.467 "memory_domains": [ 00:10:24.467 { 00:10:24.467 "dma_device_id": "system", 00:10:24.467 "dma_device_type": 1 00:10:24.467 }, 00:10:24.467 { 00:10:24.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.467 "dma_device_type": 2 00:10:24.467 } 00:10:24.467 ], 00:10:24.467 "driver_specific": {} 00:10:24.467 } 00:10:24.467 ] 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.467 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.467 "name": "Existed_Raid", 00:10:24.467 "uuid": "e4dc3ab0-e2ed-4c82-80d5-10c03fdfdf5a", 00:10:24.467 "strip_size_kb": 0, 00:10:24.467 "state": "online", 00:10:24.467 "raid_level": "raid1", 00:10:24.467 "superblock": false, 00:10:24.467 "num_base_bdevs": 4, 00:10:24.467 "num_base_bdevs_discovered": 4, 00:10:24.467 "num_base_bdevs_operational": 4, 00:10:24.467 "base_bdevs_list": [ 00:10:24.467 { 00:10:24.467 "name": "BaseBdev1", 00:10:24.467 "uuid": "eb228512-6a5b-49ee-9619-510fd4b91b1f", 00:10:24.467 "is_configured": true, 00:10:24.467 "data_offset": 0, 00:10:24.467 "data_size": 65536 00:10:24.467 }, 00:10:24.467 { 00:10:24.468 "name": "BaseBdev2", 00:10:24.468 "uuid": "2505b938-2f69-4d0e-9561-62cc831529e1", 00:10:24.468 "is_configured": true, 00:10:24.468 "data_offset": 0, 00:10:24.468 "data_size": 65536 00:10:24.468 }, 00:10:24.468 { 00:10:24.468 "name": "BaseBdev3", 00:10:24.468 "uuid": "ea1d5c65-eb68-4226-8bef-8e26e0f562a2", 
00:10:24.468 "is_configured": true, 00:10:24.468 "data_offset": 0, 00:10:24.468 "data_size": 65536 00:10:24.468 }, 00:10:24.468 { 00:10:24.468 "name": "BaseBdev4", 00:10:24.468 "uuid": "00b55764-7feb-4b61-9dd6-ddc83ac38ef0", 00:10:24.468 "is_configured": true, 00:10:24.468 "data_offset": 0, 00:10:24.468 "data_size": 65536 00:10:24.468 } 00:10:24.468 ] 00:10:24.468 }' 00:10:24.468 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.468 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.037 [2024-10-15 01:11:37.466715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.037 "name": "Existed_Raid", 00:10:25.037 "aliases": [ 00:10:25.037 "e4dc3ab0-e2ed-4c82-80d5-10c03fdfdf5a" 00:10:25.037 ], 00:10:25.037 "product_name": "Raid Volume", 00:10:25.037 "block_size": 512, 00:10:25.037 "num_blocks": 65536, 00:10:25.037 "uuid": "e4dc3ab0-e2ed-4c82-80d5-10c03fdfdf5a", 00:10:25.037 "assigned_rate_limits": { 00:10:25.037 "rw_ios_per_sec": 0, 00:10:25.037 "rw_mbytes_per_sec": 0, 00:10:25.037 "r_mbytes_per_sec": 0, 00:10:25.037 "w_mbytes_per_sec": 0 00:10:25.037 }, 00:10:25.037 "claimed": false, 00:10:25.037 "zoned": false, 00:10:25.037 "supported_io_types": { 00:10:25.037 "read": true, 00:10:25.037 "write": true, 00:10:25.037 "unmap": false, 00:10:25.037 "flush": false, 00:10:25.037 "reset": true, 00:10:25.037 "nvme_admin": false, 00:10:25.037 "nvme_io": false, 00:10:25.037 "nvme_io_md": false, 00:10:25.037 "write_zeroes": true, 00:10:25.037 "zcopy": false, 00:10:25.037 "get_zone_info": false, 00:10:25.037 "zone_management": false, 00:10:25.037 "zone_append": false, 00:10:25.037 "compare": false, 00:10:25.037 "compare_and_write": false, 00:10:25.037 "abort": false, 00:10:25.037 "seek_hole": false, 00:10:25.037 "seek_data": false, 00:10:25.037 "copy": false, 00:10:25.037 "nvme_iov_md": false 00:10:25.037 }, 00:10:25.037 "memory_domains": [ 00:10:25.037 { 00:10:25.037 "dma_device_id": "system", 00:10:25.037 "dma_device_type": 1 00:10:25.037 }, 00:10:25.037 { 00:10:25.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.037 "dma_device_type": 2 00:10:25.037 }, 00:10:25.037 { 00:10:25.037 "dma_device_id": "system", 00:10:25.037 "dma_device_type": 1 00:10:25.037 }, 00:10:25.037 { 00:10:25.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.037 "dma_device_type": 2 00:10:25.037 }, 00:10:25.037 { 00:10:25.037 "dma_device_id": "system", 00:10:25.037 "dma_device_type": 1 00:10:25.037 }, 00:10:25.037 { 00:10:25.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.037 "dma_device_type": 2 
00:10:25.037 }, 00:10:25.037 { 00:10:25.037 "dma_device_id": "system", 00:10:25.037 "dma_device_type": 1 00:10:25.037 }, 00:10:25.037 { 00:10:25.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.037 "dma_device_type": 2 00:10:25.037 } 00:10:25.037 ], 00:10:25.037 "driver_specific": { 00:10:25.037 "raid": { 00:10:25.037 "uuid": "e4dc3ab0-e2ed-4c82-80d5-10c03fdfdf5a", 00:10:25.037 "strip_size_kb": 0, 00:10:25.037 "state": "online", 00:10:25.037 "raid_level": "raid1", 00:10:25.037 "superblock": false, 00:10:25.037 "num_base_bdevs": 4, 00:10:25.037 "num_base_bdevs_discovered": 4, 00:10:25.037 "num_base_bdevs_operational": 4, 00:10:25.037 "base_bdevs_list": [ 00:10:25.037 { 00:10:25.037 "name": "BaseBdev1", 00:10:25.037 "uuid": "eb228512-6a5b-49ee-9619-510fd4b91b1f", 00:10:25.037 "is_configured": true, 00:10:25.037 "data_offset": 0, 00:10:25.037 "data_size": 65536 00:10:25.037 }, 00:10:25.037 { 00:10:25.037 "name": "BaseBdev2", 00:10:25.037 "uuid": "2505b938-2f69-4d0e-9561-62cc831529e1", 00:10:25.037 "is_configured": true, 00:10:25.037 "data_offset": 0, 00:10:25.037 "data_size": 65536 00:10:25.037 }, 00:10:25.037 { 00:10:25.037 "name": "BaseBdev3", 00:10:25.037 "uuid": "ea1d5c65-eb68-4226-8bef-8e26e0f562a2", 00:10:25.037 "is_configured": true, 00:10:25.037 "data_offset": 0, 00:10:25.037 "data_size": 65536 00:10:25.037 }, 00:10:25.037 { 00:10:25.037 "name": "BaseBdev4", 00:10:25.037 "uuid": "00b55764-7feb-4b61-9dd6-ddc83ac38ef0", 00:10:25.037 "is_configured": true, 00:10:25.037 "data_offset": 0, 00:10:25.037 "data_size": 65536 00:10:25.037 } 00:10:25.037 ] 00:10:25.037 } 00:10:25.037 } 00:10:25.037 }' 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:25.037 BaseBdev2 00:10:25.037 BaseBdev3 00:10:25.037 BaseBdev4' 00:10:25.037 
01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.037 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.038 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.297 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.297 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:10:25.297 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:25.297 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.297 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.297 [2024-10-15 01:11:37.765918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:25.297 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.298 "name": "Existed_Raid", 00:10:25.298 "uuid": "e4dc3ab0-e2ed-4c82-80d5-10c03fdfdf5a", 00:10:25.298 "strip_size_kb": 0, 00:10:25.298 "state": "online", 00:10:25.298 "raid_level": "raid1", 00:10:25.298 "superblock": false, 00:10:25.298 "num_base_bdevs": 4, 00:10:25.298 "num_base_bdevs_discovered": 3, 00:10:25.298 "num_base_bdevs_operational": 3, 00:10:25.298 "base_bdevs_list": [ 00:10:25.298 { 00:10:25.298 "name": null, 00:10:25.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.298 "is_configured": false, 00:10:25.298 "data_offset": 0, 00:10:25.298 "data_size": 65536 00:10:25.298 }, 00:10:25.298 { 00:10:25.298 "name": "BaseBdev2", 00:10:25.298 "uuid": "2505b938-2f69-4d0e-9561-62cc831529e1", 00:10:25.298 "is_configured": true, 00:10:25.298 "data_offset": 0, 00:10:25.298 "data_size": 65536 00:10:25.298 }, 00:10:25.298 { 00:10:25.298 "name": "BaseBdev3", 00:10:25.298 "uuid": "ea1d5c65-eb68-4226-8bef-8e26e0f562a2", 00:10:25.298 "is_configured": true, 00:10:25.298 "data_offset": 0, 00:10:25.298 "data_size": 65536 00:10:25.298 }, 00:10:25.298 { 
00:10:25.298 "name": "BaseBdev4", 00:10:25.298 "uuid": "00b55764-7feb-4b61-9dd6-ddc83ac38ef0", 00:10:25.298 "is_configured": true, 00:10:25.298 "data_offset": 0, 00:10:25.298 "data_size": 65536 00:10:25.298 } 00:10:25.298 ] 00:10:25.298 }' 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.298 01:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.557 [2024-10-15 01:11:38.260734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.557 
01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.557 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.817 [2024-10-15 01:11:38.332044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:25.817 01:11:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.817 [2024-10-15 01:11:38.399367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:25.817 [2024-10-15 01:11:38.399510] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.817 [2024-10-15 01:11:38.411273] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.817 [2024-10-15 01:11:38.411417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.817 [2024-10-15 01:11:38.411479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.817 01:11:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.817 BaseBdev2 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.817 01:11:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.817 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.818 [ 00:10:25.818 { 00:10:25.818 "name": "BaseBdev2", 00:10:25.818 "aliases": [ 00:10:25.818 "3b6e3459-152a-4c9d-b6e2-34f27e9748ac" 00:10:25.818 ], 00:10:25.818 "product_name": "Malloc disk", 00:10:25.818 "block_size": 512, 00:10:25.818 "num_blocks": 65536, 00:10:25.818 "uuid": "3b6e3459-152a-4c9d-b6e2-34f27e9748ac", 00:10:25.818 "assigned_rate_limits": { 00:10:25.818 "rw_ios_per_sec": 0, 00:10:25.818 "rw_mbytes_per_sec": 0, 00:10:25.818 "r_mbytes_per_sec": 0, 00:10:25.818 "w_mbytes_per_sec": 0 00:10:25.818 }, 00:10:25.818 "claimed": false, 00:10:25.818 "zoned": false, 00:10:25.818 "supported_io_types": { 00:10:25.818 "read": true, 00:10:25.818 "write": true, 00:10:25.818 "unmap": true, 00:10:25.818 "flush": true, 00:10:25.818 "reset": true, 00:10:25.818 "nvme_admin": false, 00:10:25.818 "nvme_io": false, 00:10:25.818 "nvme_io_md": false, 00:10:25.818 "write_zeroes": true, 00:10:25.818 "zcopy": true, 00:10:25.818 "get_zone_info": false, 00:10:25.818 "zone_management": false, 00:10:25.818 "zone_append": false, 00:10:25.818 "compare": false, 00:10:25.818 "compare_and_write": false, 
00:10:25.818 "abort": true, 00:10:25.818 "seek_hole": false, 00:10:25.818 "seek_data": false, 00:10:25.818 "copy": true, 00:10:25.818 "nvme_iov_md": false 00:10:25.818 }, 00:10:25.818 "memory_domains": [ 00:10:25.818 { 00:10:25.818 "dma_device_id": "system", 00:10:25.818 "dma_device_type": 1 00:10:25.818 }, 00:10:25.818 { 00:10:25.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.818 "dma_device_type": 2 00:10:25.818 } 00:10:25.818 ], 00:10:25.818 "driver_specific": {} 00:10:25.818 } 00:10:25.818 ] 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.818 BaseBdev3 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.818 01:11:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.818 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.078 [ 00:10:26.078 { 00:10:26.078 "name": "BaseBdev3", 00:10:26.078 "aliases": [ 00:10:26.078 "213ff000-247e-4eb0-9ca6-c5d7ba170157" 00:10:26.078 ], 00:10:26.078 "product_name": "Malloc disk", 00:10:26.078 "block_size": 512, 00:10:26.078 "num_blocks": 65536, 00:10:26.078 "uuid": "213ff000-247e-4eb0-9ca6-c5d7ba170157", 00:10:26.078 "assigned_rate_limits": { 00:10:26.078 "rw_ios_per_sec": 0, 00:10:26.078 "rw_mbytes_per_sec": 0, 00:10:26.078 "r_mbytes_per_sec": 0, 00:10:26.078 "w_mbytes_per_sec": 0 00:10:26.078 }, 00:10:26.078 "claimed": false, 00:10:26.078 "zoned": false, 00:10:26.078 "supported_io_types": { 00:10:26.078 "read": true, 00:10:26.078 "write": true, 00:10:26.078 "unmap": true, 00:10:26.078 "flush": true, 00:10:26.078 "reset": true, 00:10:26.078 "nvme_admin": false, 00:10:26.078 "nvme_io": false, 00:10:26.078 "nvme_io_md": false, 00:10:26.078 "write_zeroes": true, 00:10:26.078 "zcopy": true, 00:10:26.078 "get_zone_info": false, 00:10:26.078 "zone_management": false, 00:10:26.078 "zone_append": false, 00:10:26.078 "compare": false, 00:10:26.078 "compare_and_write": false, 
00:10:26.078 "abort": true, 00:10:26.078 "seek_hole": false, 00:10:26.078 "seek_data": false, 00:10:26.078 "copy": true, 00:10:26.078 "nvme_iov_md": false 00:10:26.078 }, 00:10:26.078 "memory_domains": [ 00:10:26.078 { 00:10:26.078 "dma_device_id": "system", 00:10:26.078 "dma_device_type": 1 00:10:26.078 }, 00:10:26.078 { 00:10:26.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.078 "dma_device_type": 2 00:10:26.078 } 00:10:26.078 ], 00:10:26.078 "driver_specific": {} 00:10:26.078 } 00:10:26.078 ] 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.078 BaseBdev4 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.078 01:11:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.078 [ 00:10:26.078 { 00:10:26.078 "name": "BaseBdev4", 00:10:26.078 "aliases": [ 00:10:26.078 "a2b9a09c-663f-4bba-ab03-f80d9ed8a4c1" 00:10:26.078 ], 00:10:26.078 "product_name": "Malloc disk", 00:10:26.078 "block_size": 512, 00:10:26.078 "num_blocks": 65536, 00:10:26.078 "uuid": "a2b9a09c-663f-4bba-ab03-f80d9ed8a4c1", 00:10:26.078 "assigned_rate_limits": { 00:10:26.078 "rw_ios_per_sec": 0, 00:10:26.078 "rw_mbytes_per_sec": 0, 00:10:26.078 "r_mbytes_per_sec": 0, 00:10:26.078 "w_mbytes_per_sec": 0 00:10:26.078 }, 00:10:26.078 "claimed": false, 00:10:26.078 "zoned": false, 00:10:26.078 "supported_io_types": { 00:10:26.078 "read": true, 00:10:26.078 "write": true, 00:10:26.078 "unmap": true, 00:10:26.078 "flush": true, 00:10:26.078 "reset": true, 00:10:26.078 "nvme_admin": false, 00:10:26.078 "nvme_io": false, 00:10:26.078 "nvme_io_md": false, 00:10:26.078 "write_zeroes": true, 00:10:26.078 "zcopy": true, 00:10:26.078 "get_zone_info": false, 00:10:26.078 "zone_management": false, 00:10:26.078 "zone_append": false, 00:10:26.078 "compare": false, 00:10:26.078 "compare_and_write": false, 
00:10:26.078 "abort": true, 00:10:26.078 "seek_hole": false, 00:10:26.078 "seek_data": false, 00:10:26.078 "copy": true, 00:10:26.078 "nvme_iov_md": false 00:10:26.078 }, 00:10:26.078 "memory_domains": [ 00:10:26.078 { 00:10:26.078 "dma_device_id": "system", 00:10:26.078 "dma_device_type": 1 00:10:26.078 }, 00:10:26.078 { 00:10:26.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.078 "dma_device_type": 2 00:10:26.078 } 00:10:26.078 ], 00:10:26.078 "driver_specific": {} 00:10:26.078 } 00:10:26.078 ] 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.078 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.078 [2024-10-15 01:11:38.633925] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.078 [2024-10-15 01:11:38.634030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.078 [2024-10-15 01:11:38.634072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.078 [2024-10-15 01:11:38.635970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.078 [2024-10-15 01:11:38.636060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:26.078 01:11:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.079 "name": "Existed_Raid", 00:10:26.079 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:26.079 "strip_size_kb": 0, 00:10:26.079 "state": "configuring", 00:10:26.079 "raid_level": "raid1", 00:10:26.079 "superblock": false, 00:10:26.079 "num_base_bdevs": 4, 00:10:26.079 "num_base_bdevs_discovered": 3, 00:10:26.079 "num_base_bdevs_operational": 4, 00:10:26.079 "base_bdevs_list": [ 00:10:26.079 { 00:10:26.079 "name": "BaseBdev1", 00:10:26.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.079 "is_configured": false, 00:10:26.079 "data_offset": 0, 00:10:26.079 "data_size": 0 00:10:26.079 }, 00:10:26.079 { 00:10:26.079 "name": "BaseBdev2", 00:10:26.079 "uuid": "3b6e3459-152a-4c9d-b6e2-34f27e9748ac", 00:10:26.079 "is_configured": true, 00:10:26.079 "data_offset": 0, 00:10:26.079 "data_size": 65536 00:10:26.079 }, 00:10:26.079 { 00:10:26.079 "name": "BaseBdev3", 00:10:26.079 "uuid": "213ff000-247e-4eb0-9ca6-c5d7ba170157", 00:10:26.079 "is_configured": true, 00:10:26.079 "data_offset": 0, 00:10:26.079 "data_size": 65536 00:10:26.079 }, 00:10:26.079 { 00:10:26.079 "name": "BaseBdev4", 00:10:26.079 "uuid": "a2b9a09c-663f-4bba-ab03-f80d9ed8a4c1", 00:10:26.079 "is_configured": true, 00:10:26.079 "data_offset": 0, 00:10:26.079 "data_size": 65536 00:10:26.079 } 00:10:26.079 ] 00:10:26.079 }' 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.079 01:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.647 [2024-10-15 01:11:39.129110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.647 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.648 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.648 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.648 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.648 "name": "Existed_Raid", 00:10:26.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.648 
"strip_size_kb": 0, 00:10:26.648 "state": "configuring", 00:10:26.648 "raid_level": "raid1", 00:10:26.648 "superblock": false, 00:10:26.648 "num_base_bdevs": 4, 00:10:26.648 "num_base_bdevs_discovered": 2, 00:10:26.648 "num_base_bdevs_operational": 4, 00:10:26.648 "base_bdevs_list": [ 00:10:26.648 { 00:10:26.648 "name": "BaseBdev1", 00:10:26.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.648 "is_configured": false, 00:10:26.648 "data_offset": 0, 00:10:26.648 "data_size": 0 00:10:26.648 }, 00:10:26.648 { 00:10:26.648 "name": null, 00:10:26.648 "uuid": "3b6e3459-152a-4c9d-b6e2-34f27e9748ac", 00:10:26.648 "is_configured": false, 00:10:26.648 "data_offset": 0, 00:10:26.648 "data_size": 65536 00:10:26.648 }, 00:10:26.648 { 00:10:26.648 "name": "BaseBdev3", 00:10:26.648 "uuid": "213ff000-247e-4eb0-9ca6-c5d7ba170157", 00:10:26.648 "is_configured": true, 00:10:26.648 "data_offset": 0, 00:10:26.648 "data_size": 65536 00:10:26.648 }, 00:10:26.648 { 00:10:26.648 "name": "BaseBdev4", 00:10:26.648 "uuid": "a2b9a09c-663f-4bba-ab03-f80d9ed8a4c1", 00:10:26.648 "is_configured": true, 00:10:26.648 "data_offset": 0, 00:10:26.648 "data_size": 65536 00:10:26.648 } 00:10:26.648 ] 00:10:26.648 }' 00:10:26.648 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.648 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.907 01:11:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.907 BaseBdev1 00:10:26.907 [2024-10-15 01:11:39.599557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.907 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.907 [ 00:10:26.907 { 00:10:26.907 "name": "BaseBdev1", 00:10:26.907 "aliases": [ 00:10:26.907 "b4af32ee-7dba-4c68-930d-0410b019d6e8" 00:10:26.907 ], 00:10:26.907 "product_name": "Malloc disk", 00:10:26.907 "block_size": 512, 00:10:26.907 "num_blocks": 65536, 00:10:26.907 "uuid": "b4af32ee-7dba-4c68-930d-0410b019d6e8", 00:10:26.907 "assigned_rate_limits": { 00:10:26.907 "rw_ios_per_sec": 0, 00:10:26.907 "rw_mbytes_per_sec": 0, 00:10:26.907 "r_mbytes_per_sec": 0, 00:10:26.907 "w_mbytes_per_sec": 0 00:10:26.907 }, 00:10:26.907 "claimed": true, 00:10:26.907 "claim_type": "exclusive_write", 00:10:26.907 "zoned": false, 00:10:26.907 "supported_io_types": { 00:10:26.907 "read": true, 00:10:26.907 "write": true, 00:10:26.907 "unmap": true, 00:10:26.907 "flush": true, 00:10:26.907 "reset": true, 00:10:26.907 "nvme_admin": false, 00:10:27.167 "nvme_io": false, 00:10:27.167 "nvme_io_md": false, 00:10:27.167 "write_zeroes": true, 00:10:27.167 "zcopy": true, 00:10:27.167 "get_zone_info": false, 00:10:27.167 "zone_management": false, 00:10:27.167 "zone_append": false, 00:10:27.167 "compare": false, 00:10:27.167 "compare_and_write": false, 00:10:27.167 "abort": true, 00:10:27.167 "seek_hole": false, 00:10:27.167 "seek_data": false, 00:10:27.167 "copy": true, 00:10:27.167 "nvme_iov_md": false 00:10:27.167 }, 00:10:27.167 "memory_domains": [ 00:10:27.167 { 00:10:27.167 "dma_device_id": "system", 00:10:27.167 "dma_device_type": 1 00:10:27.167 }, 00:10:27.167 { 00:10:27.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.167 "dma_device_type": 2 00:10:27.167 } 00:10:27.167 ], 00:10:27.167 "driver_specific": {} 00:10:27.167 } 00:10:27.167 ] 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.167 "name": "Existed_Raid", 00:10:27.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.167 
"strip_size_kb": 0, 00:10:27.167 "state": "configuring", 00:10:27.167 "raid_level": "raid1", 00:10:27.167 "superblock": false, 00:10:27.167 "num_base_bdevs": 4, 00:10:27.167 "num_base_bdevs_discovered": 3, 00:10:27.167 "num_base_bdevs_operational": 4, 00:10:27.167 "base_bdevs_list": [ 00:10:27.167 { 00:10:27.167 "name": "BaseBdev1", 00:10:27.167 "uuid": "b4af32ee-7dba-4c68-930d-0410b019d6e8", 00:10:27.167 "is_configured": true, 00:10:27.167 "data_offset": 0, 00:10:27.167 "data_size": 65536 00:10:27.167 }, 00:10:27.167 { 00:10:27.167 "name": null, 00:10:27.167 "uuid": "3b6e3459-152a-4c9d-b6e2-34f27e9748ac", 00:10:27.167 "is_configured": false, 00:10:27.167 "data_offset": 0, 00:10:27.167 "data_size": 65536 00:10:27.167 }, 00:10:27.167 { 00:10:27.167 "name": "BaseBdev3", 00:10:27.167 "uuid": "213ff000-247e-4eb0-9ca6-c5d7ba170157", 00:10:27.167 "is_configured": true, 00:10:27.167 "data_offset": 0, 00:10:27.167 "data_size": 65536 00:10:27.167 }, 00:10:27.167 { 00:10:27.167 "name": "BaseBdev4", 00:10:27.167 "uuid": "a2b9a09c-663f-4bba-ab03-f80d9ed8a4c1", 00:10:27.167 "is_configured": true, 00:10:27.167 "data_offset": 0, 00:10:27.167 "data_size": 65536 00:10:27.167 } 00:10:27.167 ] 00:10:27.167 }' 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.167 01:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.426 
01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.426 [2024-10-15 01:11:40.110797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.426 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.685 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.685 "name": "Existed_Raid", 00:10:27.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.685 "strip_size_kb": 0, 00:10:27.685 "state": "configuring", 00:10:27.685 "raid_level": "raid1", 00:10:27.685 "superblock": false, 00:10:27.685 "num_base_bdevs": 4, 00:10:27.685 "num_base_bdevs_discovered": 2, 00:10:27.685 "num_base_bdevs_operational": 4, 00:10:27.685 "base_bdevs_list": [ 00:10:27.685 { 00:10:27.685 "name": "BaseBdev1", 00:10:27.685 "uuid": "b4af32ee-7dba-4c68-930d-0410b019d6e8", 00:10:27.685 "is_configured": true, 00:10:27.685 "data_offset": 0, 00:10:27.685 "data_size": 65536 00:10:27.685 }, 00:10:27.685 { 00:10:27.685 "name": null, 00:10:27.685 "uuid": "3b6e3459-152a-4c9d-b6e2-34f27e9748ac", 00:10:27.685 "is_configured": false, 00:10:27.685 "data_offset": 0, 00:10:27.685 "data_size": 65536 00:10:27.685 }, 00:10:27.685 { 00:10:27.685 "name": null, 00:10:27.685 "uuid": "213ff000-247e-4eb0-9ca6-c5d7ba170157", 00:10:27.685 "is_configured": false, 00:10:27.685 "data_offset": 0, 00:10:27.685 "data_size": 65536 00:10:27.685 }, 00:10:27.685 { 00:10:27.685 "name": "BaseBdev4", 00:10:27.685 "uuid": "a2b9a09c-663f-4bba-ab03-f80d9ed8a4c1", 00:10:27.685 "is_configured": true, 00:10:27.685 "data_offset": 0, 00:10:27.685 "data_size": 65536 00:10:27.685 } 00:10:27.685 ] 00:10:27.685 }' 00:10:27.685 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.685 01:11:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.946 [2024-10-15 01:11:40.562047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.946 "name": "Existed_Raid", 00:10:27.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.946 "strip_size_kb": 0, 00:10:27.946 "state": "configuring", 00:10:27.946 "raid_level": "raid1", 00:10:27.946 "superblock": false, 00:10:27.946 "num_base_bdevs": 4, 00:10:27.946 "num_base_bdevs_discovered": 3, 00:10:27.946 "num_base_bdevs_operational": 4, 00:10:27.946 "base_bdevs_list": [ 00:10:27.946 { 00:10:27.946 "name": "BaseBdev1", 00:10:27.946 "uuid": "b4af32ee-7dba-4c68-930d-0410b019d6e8", 00:10:27.946 "is_configured": true, 00:10:27.946 "data_offset": 0, 00:10:27.946 "data_size": 65536 00:10:27.946 }, 00:10:27.946 { 00:10:27.946 "name": null, 00:10:27.946 "uuid": "3b6e3459-152a-4c9d-b6e2-34f27e9748ac", 00:10:27.946 "is_configured": false, 00:10:27.946 "data_offset": 0, 00:10:27.946 "data_size": 65536 00:10:27.946 }, 00:10:27.946 { 
00:10:27.946 "name": "BaseBdev3", 00:10:27.946 "uuid": "213ff000-247e-4eb0-9ca6-c5d7ba170157", 00:10:27.946 "is_configured": true, 00:10:27.946 "data_offset": 0, 00:10:27.946 "data_size": 65536 00:10:27.946 }, 00:10:27.946 { 00:10:27.946 "name": "BaseBdev4", 00:10:27.946 "uuid": "a2b9a09c-663f-4bba-ab03-f80d9ed8a4c1", 00:10:27.946 "is_configured": true, 00:10:27.946 "data_offset": 0, 00:10:27.946 "data_size": 65536 00:10:27.946 } 00:10:27.946 ] 00:10:27.946 }' 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.946 01:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.516 [2024-10-15 01:11:41.069215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.516 "name": "Existed_Raid", 00:10:28.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.516 "strip_size_kb": 0, 00:10:28.516 "state": "configuring", 00:10:28.516 "raid_level": "raid1", 00:10:28.516 "superblock": false, 00:10:28.516 
"num_base_bdevs": 4, 00:10:28.516 "num_base_bdevs_discovered": 2, 00:10:28.516 "num_base_bdevs_operational": 4, 00:10:28.516 "base_bdevs_list": [ 00:10:28.516 { 00:10:28.516 "name": null, 00:10:28.516 "uuid": "b4af32ee-7dba-4c68-930d-0410b019d6e8", 00:10:28.516 "is_configured": false, 00:10:28.516 "data_offset": 0, 00:10:28.516 "data_size": 65536 00:10:28.516 }, 00:10:28.516 { 00:10:28.516 "name": null, 00:10:28.516 "uuid": "3b6e3459-152a-4c9d-b6e2-34f27e9748ac", 00:10:28.516 "is_configured": false, 00:10:28.516 "data_offset": 0, 00:10:28.516 "data_size": 65536 00:10:28.516 }, 00:10:28.516 { 00:10:28.516 "name": "BaseBdev3", 00:10:28.516 "uuid": "213ff000-247e-4eb0-9ca6-c5d7ba170157", 00:10:28.516 "is_configured": true, 00:10:28.516 "data_offset": 0, 00:10:28.516 "data_size": 65536 00:10:28.516 }, 00:10:28.516 { 00:10:28.516 "name": "BaseBdev4", 00:10:28.516 "uuid": "a2b9a09c-663f-4bba-ab03-f80d9ed8a4c1", 00:10:28.516 "is_configured": true, 00:10:28.516 "data_offset": 0, 00:10:28.516 "data_size": 65536 00:10:28.516 } 00:10:28.516 ] 00:10:28.516 }' 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.516 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:29.087 01:11:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.087 [2024-10-15 01:11:41.550793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.087 01:11:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.087 "name": "Existed_Raid", 00:10:29.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.087 "strip_size_kb": 0, 00:10:29.087 "state": "configuring", 00:10:29.087 "raid_level": "raid1", 00:10:29.087 "superblock": false, 00:10:29.087 "num_base_bdevs": 4, 00:10:29.087 "num_base_bdevs_discovered": 3, 00:10:29.087 "num_base_bdevs_operational": 4, 00:10:29.087 "base_bdevs_list": [ 00:10:29.087 { 00:10:29.087 "name": null, 00:10:29.087 "uuid": "b4af32ee-7dba-4c68-930d-0410b019d6e8", 00:10:29.087 "is_configured": false, 00:10:29.087 "data_offset": 0, 00:10:29.087 "data_size": 65536 00:10:29.087 }, 00:10:29.087 { 00:10:29.087 "name": "BaseBdev2", 00:10:29.087 "uuid": "3b6e3459-152a-4c9d-b6e2-34f27e9748ac", 00:10:29.087 "is_configured": true, 00:10:29.087 "data_offset": 0, 00:10:29.087 "data_size": 65536 00:10:29.087 }, 00:10:29.087 { 00:10:29.087 "name": "BaseBdev3", 00:10:29.087 "uuid": "213ff000-247e-4eb0-9ca6-c5d7ba170157", 00:10:29.087 "is_configured": true, 00:10:29.087 "data_offset": 0, 00:10:29.087 "data_size": 65536 00:10:29.087 }, 00:10:29.087 { 00:10:29.087 "name": "BaseBdev4", 00:10:29.087 "uuid": "a2b9a09c-663f-4bba-ab03-f80d9ed8a4c1", 00:10:29.087 "is_configured": true, 00:10:29.087 "data_offset": 0, 00:10:29.087 "data_size": 65536 00:10:29.087 } 00:10:29.087 ] 00:10:29.087 }' 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.087 01:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.347 01:11:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.347 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:29.347 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.347 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.347 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.347 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:29.347 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.347 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:29.347 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.347 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b4af32ee-7dba-4c68-930d-0410b019d6e8 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.608 [2024-10-15 01:11:42.124913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:29.608 [2024-10-15 01:11:42.125050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:29.608 [2024-10-15 01:11:42.125066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:29.608 [2024-10-15 01:11:42.125358] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:10:29.608 [2024-10-15 01:11:42.125499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:29.608 [2024-10-15 01:11:42.125509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:29.608 [2024-10-15 01:11:42.125692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.608 NewBaseBdev 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.608 01:11:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.608 [ 00:10:29.608 { 00:10:29.608 "name": "NewBaseBdev", 00:10:29.608 "aliases": [ 00:10:29.608 "b4af32ee-7dba-4c68-930d-0410b019d6e8" 00:10:29.608 ], 00:10:29.608 "product_name": "Malloc disk", 00:10:29.608 "block_size": 512, 00:10:29.608 "num_blocks": 65536, 00:10:29.608 "uuid": "b4af32ee-7dba-4c68-930d-0410b019d6e8", 00:10:29.608 "assigned_rate_limits": { 00:10:29.608 "rw_ios_per_sec": 0, 00:10:29.608 "rw_mbytes_per_sec": 0, 00:10:29.608 "r_mbytes_per_sec": 0, 00:10:29.608 "w_mbytes_per_sec": 0 00:10:29.608 }, 00:10:29.608 "claimed": true, 00:10:29.608 "claim_type": "exclusive_write", 00:10:29.608 "zoned": false, 00:10:29.608 "supported_io_types": { 00:10:29.608 "read": true, 00:10:29.608 "write": true, 00:10:29.608 "unmap": true, 00:10:29.608 "flush": true, 00:10:29.608 "reset": true, 00:10:29.608 "nvme_admin": false, 00:10:29.608 "nvme_io": false, 00:10:29.608 "nvme_io_md": false, 00:10:29.608 "write_zeroes": true, 00:10:29.608 "zcopy": true, 00:10:29.608 "get_zone_info": false, 00:10:29.608 "zone_management": false, 00:10:29.608 "zone_append": false, 00:10:29.608 "compare": false, 00:10:29.608 "compare_and_write": false, 00:10:29.608 "abort": true, 00:10:29.608 "seek_hole": false, 00:10:29.608 "seek_data": false, 00:10:29.608 "copy": true, 00:10:29.608 "nvme_iov_md": false 00:10:29.608 }, 00:10:29.608 "memory_domains": [ 00:10:29.608 { 00:10:29.608 "dma_device_id": "system", 00:10:29.608 "dma_device_type": 1 00:10:29.608 }, 00:10:29.608 { 00:10:29.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.608 "dma_device_type": 2 00:10:29.608 } 00:10:29.608 ], 00:10:29.608 "driver_specific": {} 00:10:29.608 } 00:10:29.608 ] 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:29.608 01:11:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.608 "name": "Existed_Raid", 00:10:29.608 "uuid": "0ca76cfc-4f64-4e77-88ce-9c04df64dd16", 00:10:29.608 "strip_size_kb": 0, 00:10:29.608 "state": "online", 00:10:29.608 "raid_level": "raid1", 
00:10:29.608 "superblock": false, 00:10:29.608 "num_base_bdevs": 4, 00:10:29.608 "num_base_bdevs_discovered": 4, 00:10:29.608 "num_base_bdevs_operational": 4, 00:10:29.608 "base_bdevs_list": [ 00:10:29.608 { 00:10:29.608 "name": "NewBaseBdev", 00:10:29.608 "uuid": "b4af32ee-7dba-4c68-930d-0410b019d6e8", 00:10:29.608 "is_configured": true, 00:10:29.608 "data_offset": 0, 00:10:29.608 "data_size": 65536 00:10:29.608 }, 00:10:29.608 { 00:10:29.608 "name": "BaseBdev2", 00:10:29.608 "uuid": "3b6e3459-152a-4c9d-b6e2-34f27e9748ac", 00:10:29.608 "is_configured": true, 00:10:29.608 "data_offset": 0, 00:10:29.608 "data_size": 65536 00:10:29.608 }, 00:10:29.608 { 00:10:29.608 "name": "BaseBdev3", 00:10:29.608 "uuid": "213ff000-247e-4eb0-9ca6-c5d7ba170157", 00:10:29.608 "is_configured": true, 00:10:29.608 "data_offset": 0, 00:10:29.608 "data_size": 65536 00:10:29.608 }, 00:10:29.608 { 00:10:29.608 "name": "BaseBdev4", 00:10:29.608 "uuid": "a2b9a09c-663f-4bba-ab03-f80d9ed8a4c1", 00:10:29.608 "is_configured": true, 00:10:29.608 "data_offset": 0, 00:10:29.608 "data_size": 65536 00:10:29.608 } 00:10:29.608 ] 00:10:29.608 }' 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.608 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:30.179 [2024-10-15 01:11:42.608474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:30.179 "name": "Existed_Raid", 00:10:30.179 "aliases": [ 00:10:30.179 "0ca76cfc-4f64-4e77-88ce-9c04df64dd16" 00:10:30.179 ], 00:10:30.179 "product_name": "Raid Volume", 00:10:30.179 "block_size": 512, 00:10:30.179 "num_blocks": 65536, 00:10:30.179 "uuid": "0ca76cfc-4f64-4e77-88ce-9c04df64dd16", 00:10:30.179 "assigned_rate_limits": { 00:10:30.179 "rw_ios_per_sec": 0, 00:10:30.179 "rw_mbytes_per_sec": 0, 00:10:30.179 "r_mbytes_per_sec": 0, 00:10:30.179 "w_mbytes_per_sec": 0 00:10:30.179 }, 00:10:30.179 "claimed": false, 00:10:30.179 "zoned": false, 00:10:30.179 "supported_io_types": { 00:10:30.179 "read": true, 00:10:30.179 "write": true, 00:10:30.179 "unmap": false, 00:10:30.179 "flush": false, 00:10:30.179 "reset": true, 00:10:30.179 "nvme_admin": false, 00:10:30.179 "nvme_io": false, 00:10:30.179 "nvme_io_md": false, 00:10:30.179 "write_zeroes": true, 00:10:30.179 "zcopy": false, 00:10:30.179 "get_zone_info": false, 00:10:30.179 "zone_management": false, 00:10:30.179 "zone_append": false, 00:10:30.179 "compare": false, 00:10:30.179 "compare_and_write": false, 00:10:30.179 "abort": false, 00:10:30.179 "seek_hole": false, 00:10:30.179 "seek_data": false, 00:10:30.179 "copy": false, 00:10:30.179 
"nvme_iov_md": false 00:10:30.179 }, 00:10:30.179 "memory_domains": [ 00:10:30.179 { 00:10:30.179 "dma_device_id": "system", 00:10:30.179 "dma_device_type": 1 00:10:30.179 }, 00:10:30.179 { 00:10:30.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.179 "dma_device_type": 2 00:10:30.179 }, 00:10:30.179 { 00:10:30.179 "dma_device_id": "system", 00:10:30.179 "dma_device_type": 1 00:10:30.179 }, 00:10:30.179 { 00:10:30.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.179 "dma_device_type": 2 00:10:30.179 }, 00:10:30.179 { 00:10:30.179 "dma_device_id": "system", 00:10:30.179 "dma_device_type": 1 00:10:30.179 }, 00:10:30.179 { 00:10:30.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.179 "dma_device_type": 2 00:10:30.179 }, 00:10:30.179 { 00:10:30.179 "dma_device_id": "system", 00:10:30.179 "dma_device_type": 1 00:10:30.179 }, 00:10:30.179 { 00:10:30.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.179 "dma_device_type": 2 00:10:30.179 } 00:10:30.179 ], 00:10:30.179 "driver_specific": { 00:10:30.179 "raid": { 00:10:30.179 "uuid": "0ca76cfc-4f64-4e77-88ce-9c04df64dd16", 00:10:30.179 "strip_size_kb": 0, 00:10:30.179 "state": "online", 00:10:30.179 "raid_level": "raid1", 00:10:30.179 "superblock": false, 00:10:30.179 "num_base_bdevs": 4, 00:10:30.179 "num_base_bdevs_discovered": 4, 00:10:30.179 "num_base_bdevs_operational": 4, 00:10:30.179 "base_bdevs_list": [ 00:10:30.179 { 00:10:30.179 "name": "NewBaseBdev", 00:10:30.179 "uuid": "b4af32ee-7dba-4c68-930d-0410b019d6e8", 00:10:30.179 "is_configured": true, 00:10:30.179 "data_offset": 0, 00:10:30.179 "data_size": 65536 00:10:30.179 }, 00:10:30.179 { 00:10:30.179 "name": "BaseBdev2", 00:10:30.179 "uuid": "3b6e3459-152a-4c9d-b6e2-34f27e9748ac", 00:10:30.179 "is_configured": true, 00:10:30.179 "data_offset": 0, 00:10:30.179 "data_size": 65536 00:10:30.179 }, 00:10:30.179 { 00:10:30.179 "name": "BaseBdev3", 00:10:30.179 "uuid": "213ff000-247e-4eb0-9ca6-c5d7ba170157", 00:10:30.179 "is_configured": true, 
00:10:30.179 "data_offset": 0, 00:10:30.179 "data_size": 65536 00:10:30.179 }, 00:10:30.179 { 00:10:30.179 "name": "BaseBdev4", 00:10:30.179 "uuid": "a2b9a09c-663f-4bba-ab03-f80d9ed8a4c1", 00:10:30.179 "is_configured": true, 00:10:30.179 "data_offset": 0, 00:10:30.179 "data_size": 65536 00:10:30.179 } 00:10:30.179 ] 00:10:30.179 } 00:10:30.179 } 00:10:30.179 }' 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:30.179 BaseBdev2 00:10:30.179 BaseBdev3 00:10:30.179 BaseBdev4' 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.179 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.440 [2024-10-15 01:11:42.951566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.440 [2024-10-15 01:11:42.951593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.440 [2024-10-15 01:11:42.951681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.440 [2024-10-15 01:11:42.951956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.440 [2024-10-15 01:11:42.951972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83707 
00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83707 ']' 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83707 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83707 00:10:30.440 killing process with pid 83707 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83707' 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 83707 00:10:30.440 [2024-10-15 01:11:42.998955] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.440 01:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 83707 00:10:30.440 [2024-10-15 01:11:43.039374] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:30.701 00:10:30.701 real 0m9.507s 00:10:30.701 user 0m16.328s 00:10:30.701 sys 0m1.986s 00:10:30.701 ************************************ 00:10:30.701 END TEST raid_state_function_test 00:10:30.701 ************************************ 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.701 01:11:43 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:30.701 01:11:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:30.701 01:11:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.701 01:11:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.701 ************************************ 00:10:30.701 START TEST raid_state_function_test_sb 00:10:30.701 ************************************ 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.701 01:11:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84362 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84362' 00:10:30.701 Process raid pid: 84362 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84362 00:10:30.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84362 ']' 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.701 01:11:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.701 [2024-10-15 01:11:43.417400] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:10:30.701 [2024-10-15 01:11:43.417623] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.961 [2024-10-15 01:11:43.545028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.961 [2024-10-15 01:11:43.570620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.961 [2024-10-15 01:11:43.613240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.961 [2024-10-15 01:11:43.613365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.530 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.531 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:31.531 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.531 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.531 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.791 [2024-10-15 01:11:44.255219] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.791 [2024-10-15 01:11:44.255271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.791 [2024-10-15 01:11:44.255284] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.791 [2024-10-15 01:11:44.255295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.791 [2024-10-15 01:11:44.255301] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:31.791 [2024-10-15 01:11:44.255314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.791 [2024-10-15 01:11:44.255321] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:31.791 [2024-10-15 01:11:44.255329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.791 01:11:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.791 "name": "Existed_Raid", 00:10:31.791 "uuid": "d5585c27-a8df-4d4b-94e7-9e7e1f1a3e9b", 00:10:31.791 "strip_size_kb": 0, 00:10:31.791 "state": "configuring", 00:10:31.791 "raid_level": "raid1", 00:10:31.791 "superblock": true, 00:10:31.791 "num_base_bdevs": 4, 00:10:31.791 "num_base_bdevs_discovered": 0, 00:10:31.791 "num_base_bdevs_operational": 4, 00:10:31.791 "base_bdevs_list": [ 00:10:31.791 { 00:10:31.791 "name": "BaseBdev1", 00:10:31.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.791 "is_configured": false, 00:10:31.791 "data_offset": 0, 00:10:31.791 "data_size": 0 00:10:31.791 }, 00:10:31.791 { 00:10:31.791 "name": "BaseBdev2", 00:10:31.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.791 "is_configured": false, 00:10:31.791 "data_offset": 0, 00:10:31.791 "data_size": 0 00:10:31.791 }, 00:10:31.791 { 00:10:31.791 "name": "BaseBdev3", 00:10:31.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.791 "is_configured": false, 00:10:31.791 "data_offset": 0, 00:10:31.791 "data_size": 0 00:10:31.791 }, 00:10:31.791 { 00:10:31.791 "name": "BaseBdev4", 00:10:31.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.791 "is_configured": false, 00:10:31.791 "data_offset": 0, 00:10:31.791 "data_size": 0 00:10:31.791 } 00:10:31.791 ] 00:10:31.791 }' 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.791 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.051 01:11:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:32.051 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.051 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.051 [2024-10-15 01:11:44.766242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.051 [2024-10-15 01:11:44.766283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:32.051 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.051 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:32.051 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.051 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.312 [2024-10-15 01:11:44.778252] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:32.312 [2024-10-15 01:11:44.778294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:32.312 [2024-10-15 01:11:44.778304] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.312 [2024-10-15 01:11:44.778313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.312 [2024-10-15 01:11:44.778320] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:32.312 [2024-10-15 01:11:44.778329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.312 [2024-10-15 01:11:44.778335] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:32.312 [2024-10-15 01:11:44.778344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.312 [2024-10-15 01:11:44.799114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.312 BaseBdev1 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.312 [ 00:10:32.312 { 00:10:32.312 "name": "BaseBdev1", 00:10:32.312 "aliases": [ 00:10:32.312 "f94d2685-d102-45f2-a39a-1be374751065" 00:10:32.312 ], 00:10:32.312 "product_name": "Malloc disk", 00:10:32.312 "block_size": 512, 00:10:32.312 "num_blocks": 65536, 00:10:32.312 "uuid": "f94d2685-d102-45f2-a39a-1be374751065", 00:10:32.312 "assigned_rate_limits": { 00:10:32.312 "rw_ios_per_sec": 0, 00:10:32.312 "rw_mbytes_per_sec": 0, 00:10:32.312 "r_mbytes_per_sec": 0, 00:10:32.312 "w_mbytes_per_sec": 0 00:10:32.312 }, 00:10:32.312 "claimed": true, 00:10:32.312 "claim_type": "exclusive_write", 00:10:32.312 "zoned": false, 00:10:32.312 "supported_io_types": { 00:10:32.312 "read": true, 00:10:32.312 "write": true, 00:10:32.312 "unmap": true, 00:10:32.312 "flush": true, 00:10:32.312 "reset": true, 00:10:32.312 "nvme_admin": false, 00:10:32.312 "nvme_io": false, 00:10:32.312 "nvme_io_md": false, 00:10:32.312 "write_zeroes": true, 00:10:32.312 "zcopy": true, 00:10:32.312 "get_zone_info": false, 00:10:32.312 "zone_management": false, 00:10:32.312 "zone_append": false, 00:10:32.312 "compare": false, 00:10:32.312 "compare_and_write": false, 00:10:32.312 "abort": true, 00:10:32.312 "seek_hole": false, 00:10:32.312 "seek_data": false, 00:10:32.312 "copy": true, 00:10:32.312 "nvme_iov_md": false 00:10:32.312 }, 00:10:32.312 "memory_domains": [ 00:10:32.312 { 00:10:32.312 "dma_device_id": "system", 00:10:32.312 "dma_device_type": 1 00:10:32.312 }, 00:10:32.312 { 00:10:32.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.312 "dma_device_type": 2 00:10:32.312 } 00:10:32.312 ], 00:10:32.312 "driver_specific": {} 
00:10:32.312 } 00:10:32.312 ] 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.312 "name": "Existed_Raid", 00:10:32.312 "uuid": "39ec451e-93d8-46b9-8f47-767c4fd13ae7", 00:10:32.312 "strip_size_kb": 0, 00:10:32.312 "state": "configuring", 00:10:32.312 "raid_level": "raid1", 00:10:32.312 "superblock": true, 00:10:32.312 "num_base_bdevs": 4, 00:10:32.312 "num_base_bdevs_discovered": 1, 00:10:32.312 "num_base_bdevs_operational": 4, 00:10:32.312 "base_bdevs_list": [ 00:10:32.312 { 00:10:32.312 "name": "BaseBdev1", 00:10:32.312 "uuid": "f94d2685-d102-45f2-a39a-1be374751065", 00:10:32.312 "is_configured": true, 00:10:32.312 "data_offset": 2048, 00:10:32.312 "data_size": 63488 00:10:32.312 }, 00:10:32.312 { 00:10:32.312 "name": "BaseBdev2", 00:10:32.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.312 "is_configured": false, 00:10:32.312 "data_offset": 0, 00:10:32.312 "data_size": 0 00:10:32.312 }, 00:10:32.312 { 00:10:32.312 "name": "BaseBdev3", 00:10:32.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.312 "is_configured": false, 00:10:32.312 "data_offset": 0, 00:10:32.312 "data_size": 0 00:10:32.312 }, 00:10:32.312 { 00:10:32.312 "name": "BaseBdev4", 00:10:32.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.312 "is_configured": false, 00:10:32.312 "data_offset": 0, 00:10:32.312 "data_size": 0 00:10:32.312 } 00:10:32.312 ] 00:10:32.312 }' 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.312 01:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:32.573 [2024-10-15 01:11:45.266364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.573 [2024-10-15 01:11:45.266424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.573 [2024-10-15 01:11:45.278417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.573 [2024-10-15 01:11:45.280295] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.573 [2024-10-15 01:11:45.280336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.573 [2024-10-15 01:11:45.280345] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:32.573 [2024-10-15 01:11:45.280353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.573 [2024-10-15 01:11:45.280359] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:32.573 [2024-10-15 01:11:45.280367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:32.573 01:11:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.573 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.833 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.833 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.833 "name": 
"Existed_Raid", 00:10:32.833 "uuid": "7448fec4-a633-45a9-ae05-f6e87c934637", 00:10:32.833 "strip_size_kb": 0, 00:10:32.833 "state": "configuring", 00:10:32.833 "raid_level": "raid1", 00:10:32.833 "superblock": true, 00:10:32.833 "num_base_bdevs": 4, 00:10:32.833 "num_base_bdevs_discovered": 1, 00:10:32.833 "num_base_bdevs_operational": 4, 00:10:32.833 "base_bdevs_list": [ 00:10:32.833 { 00:10:32.833 "name": "BaseBdev1", 00:10:32.833 "uuid": "f94d2685-d102-45f2-a39a-1be374751065", 00:10:32.833 "is_configured": true, 00:10:32.833 "data_offset": 2048, 00:10:32.833 "data_size": 63488 00:10:32.833 }, 00:10:32.833 { 00:10:32.833 "name": "BaseBdev2", 00:10:32.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.833 "is_configured": false, 00:10:32.833 "data_offset": 0, 00:10:32.833 "data_size": 0 00:10:32.833 }, 00:10:32.833 { 00:10:32.833 "name": "BaseBdev3", 00:10:32.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.833 "is_configured": false, 00:10:32.833 "data_offset": 0, 00:10:32.833 "data_size": 0 00:10:32.833 }, 00:10:32.833 { 00:10:32.833 "name": "BaseBdev4", 00:10:32.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.833 "is_configured": false, 00:10:32.833 "data_offset": 0, 00:10:32.833 "data_size": 0 00:10:32.833 } 00:10:32.833 ] 00:10:32.833 }' 00:10:32.833 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.833 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.093 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:33.093 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.093 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.093 [2024-10-15 01:11:45.732615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.093 
BaseBdev2 00:10:33.093 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.093 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:33.093 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:33.093 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.093 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:33.093 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.093 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.093 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:33.093 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.093 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.094 [ 00:10:33.094 { 00:10:33.094 "name": "BaseBdev2", 00:10:33.094 "aliases": [ 00:10:33.094 "cde4db2c-67c5-4200-86d2-a9b9f9a1bf88" 00:10:33.094 ], 00:10:33.094 "product_name": "Malloc disk", 00:10:33.094 "block_size": 512, 00:10:33.094 "num_blocks": 65536, 00:10:33.094 "uuid": "cde4db2c-67c5-4200-86d2-a9b9f9a1bf88", 00:10:33.094 "assigned_rate_limits": { 
00:10:33.094 "rw_ios_per_sec": 0, 00:10:33.094 "rw_mbytes_per_sec": 0, 00:10:33.094 "r_mbytes_per_sec": 0, 00:10:33.094 "w_mbytes_per_sec": 0 00:10:33.094 }, 00:10:33.094 "claimed": true, 00:10:33.094 "claim_type": "exclusive_write", 00:10:33.094 "zoned": false, 00:10:33.094 "supported_io_types": { 00:10:33.094 "read": true, 00:10:33.094 "write": true, 00:10:33.094 "unmap": true, 00:10:33.094 "flush": true, 00:10:33.094 "reset": true, 00:10:33.094 "nvme_admin": false, 00:10:33.094 "nvme_io": false, 00:10:33.094 "nvme_io_md": false, 00:10:33.094 "write_zeroes": true, 00:10:33.094 "zcopy": true, 00:10:33.094 "get_zone_info": false, 00:10:33.094 "zone_management": false, 00:10:33.094 "zone_append": false, 00:10:33.094 "compare": false, 00:10:33.094 "compare_and_write": false, 00:10:33.094 "abort": true, 00:10:33.094 "seek_hole": false, 00:10:33.094 "seek_data": false, 00:10:33.094 "copy": true, 00:10:33.094 "nvme_iov_md": false 00:10:33.094 }, 00:10:33.094 "memory_domains": [ 00:10:33.094 { 00:10:33.094 "dma_device_id": "system", 00:10:33.094 "dma_device_type": 1 00:10:33.094 }, 00:10:33.094 { 00:10:33.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.094 "dma_device_type": 2 00:10:33.094 } 00:10:33.094 ], 00:10:33.094 "driver_specific": {} 00:10:33.094 } 00:10:33.094 ] 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.094 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.354 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.354 "name": "Existed_Raid", 00:10:33.354 "uuid": "7448fec4-a633-45a9-ae05-f6e87c934637", 00:10:33.354 "strip_size_kb": 0, 00:10:33.354 "state": "configuring", 00:10:33.354 "raid_level": "raid1", 00:10:33.354 "superblock": true, 00:10:33.354 "num_base_bdevs": 4, 00:10:33.354 "num_base_bdevs_discovered": 2, 00:10:33.354 "num_base_bdevs_operational": 4, 00:10:33.354 
"base_bdevs_list": [ 00:10:33.354 { 00:10:33.354 "name": "BaseBdev1", 00:10:33.354 "uuid": "f94d2685-d102-45f2-a39a-1be374751065", 00:10:33.354 "is_configured": true, 00:10:33.354 "data_offset": 2048, 00:10:33.354 "data_size": 63488 00:10:33.354 }, 00:10:33.354 { 00:10:33.354 "name": "BaseBdev2", 00:10:33.354 "uuid": "cde4db2c-67c5-4200-86d2-a9b9f9a1bf88", 00:10:33.354 "is_configured": true, 00:10:33.354 "data_offset": 2048, 00:10:33.354 "data_size": 63488 00:10:33.354 }, 00:10:33.354 { 00:10:33.354 "name": "BaseBdev3", 00:10:33.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.354 "is_configured": false, 00:10:33.354 "data_offset": 0, 00:10:33.354 "data_size": 0 00:10:33.354 }, 00:10:33.354 { 00:10:33.354 "name": "BaseBdev4", 00:10:33.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.354 "is_configured": false, 00:10:33.354 "data_offset": 0, 00:10:33.354 "data_size": 0 00:10:33.354 } 00:10:33.354 ] 00:10:33.354 }' 00:10:33.354 01:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.354 01:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.614 [2024-10-15 01:11:46.255145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.614 BaseBdev3 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.614 [ 00:10:33.614 { 00:10:33.614 "name": "BaseBdev3", 00:10:33.614 "aliases": [ 00:10:33.614 "c1298125-b6f8-481b-8d48-4355b8da63a3" 00:10:33.614 ], 00:10:33.614 "product_name": "Malloc disk", 00:10:33.614 "block_size": 512, 00:10:33.614 "num_blocks": 65536, 00:10:33.614 "uuid": "c1298125-b6f8-481b-8d48-4355b8da63a3", 00:10:33.614 "assigned_rate_limits": { 00:10:33.614 "rw_ios_per_sec": 0, 00:10:33.614 "rw_mbytes_per_sec": 0, 00:10:33.614 "r_mbytes_per_sec": 0, 00:10:33.614 "w_mbytes_per_sec": 0 00:10:33.614 }, 00:10:33.614 "claimed": true, 00:10:33.614 "claim_type": "exclusive_write", 00:10:33.614 "zoned": false, 00:10:33.614 "supported_io_types": { 00:10:33.614 "read": true, 00:10:33.614 
"write": true, 00:10:33.614 "unmap": true, 00:10:33.614 "flush": true, 00:10:33.614 "reset": true, 00:10:33.614 "nvme_admin": false, 00:10:33.614 "nvme_io": false, 00:10:33.614 "nvme_io_md": false, 00:10:33.614 "write_zeroes": true, 00:10:33.614 "zcopy": true, 00:10:33.614 "get_zone_info": false, 00:10:33.614 "zone_management": false, 00:10:33.614 "zone_append": false, 00:10:33.614 "compare": false, 00:10:33.614 "compare_and_write": false, 00:10:33.614 "abort": true, 00:10:33.614 "seek_hole": false, 00:10:33.614 "seek_data": false, 00:10:33.614 "copy": true, 00:10:33.614 "nvme_iov_md": false 00:10:33.614 }, 00:10:33.614 "memory_domains": [ 00:10:33.614 { 00:10:33.614 "dma_device_id": "system", 00:10:33.614 "dma_device_type": 1 00:10:33.614 }, 00:10:33.614 { 00:10:33.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.614 "dma_device_type": 2 00:10:33.614 } 00:10:33.614 ], 00:10:33.614 "driver_specific": {} 00:10:33.614 } 00:10:33.614 ] 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.614 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.874 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.874 "name": "Existed_Raid", 00:10:33.874 "uuid": "7448fec4-a633-45a9-ae05-f6e87c934637", 00:10:33.874 "strip_size_kb": 0, 00:10:33.874 "state": "configuring", 00:10:33.874 "raid_level": "raid1", 00:10:33.874 "superblock": true, 00:10:33.874 "num_base_bdevs": 4, 00:10:33.874 "num_base_bdevs_discovered": 3, 00:10:33.874 "num_base_bdevs_operational": 4, 00:10:33.874 "base_bdevs_list": [ 00:10:33.874 { 00:10:33.874 "name": "BaseBdev1", 00:10:33.874 "uuid": "f94d2685-d102-45f2-a39a-1be374751065", 00:10:33.874 "is_configured": true, 00:10:33.874 "data_offset": 2048, 00:10:33.874 "data_size": 63488 00:10:33.874 }, 00:10:33.874 { 00:10:33.874 "name": "BaseBdev2", 00:10:33.874 "uuid": 
"cde4db2c-67c5-4200-86d2-a9b9f9a1bf88", 00:10:33.874 "is_configured": true, 00:10:33.874 "data_offset": 2048, 00:10:33.874 "data_size": 63488 00:10:33.874 }, 00:10:33.874 { 00:10:33.874 "name": "BaseBdev3", 00:10:33.874 "uuid": "c1298125-b6f8-481b-8d48-4355b8da63a3", 00:10:33.874 "is_configured": true, 00:10:33.874 "data_offset": 2048, 00:10:33.874 "data_size": 63488 00:10:33.874 }, 00:10:33.874 { 00:10:33.874 "name": "BaseBdev4", 00:10:33.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.874 "is_configured": false, 00:10:33.874 "data_offset": 0, 00:10:33.874 "data_size": 0 00:10:33.874 } 00:10:33.874 ] 00:10:33.874 }' 00:10:33.874 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.874 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.134 [2024-10-15 01:11:46.797428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:34.134 [2024-10-15 01:11:46.797623] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:34.134 [2024-10-15 01:11:46.797638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:34.134 [2024-10-15 01:11:46.797927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:34.134 [2024-10-15 01:11:46.798084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:34.134 [2024-10-15 01:11:46.798099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 
00:10:34.134 [2024-10-15 01:11:46.798239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.134 BaseBdev4 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.134 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.134 [ 00:10:34.134 { 00:10:34.134 "name": "BaseBdev4", 00:10:34.134 "aliases": [ 00:10:34.134 "5a2c4516-be4e-4a59-ae23-2eccd1061ed3" 00:10:34.134 ], 00:10:34.134 "product_name": "Malloc disk", 00:10:34.134 "block_size": 512, 00:10:34.134 
"num_blocks": 65536, 00:10:34.134 "uuid": "5a2c4516-be4e-4a59-ae23-2eccd1061ed3", 00:10:34.134 "assigned_rate_limits": { 00:10:34.134 "rw_ios_per_sec": 0, 00:10:34.134 "rw_mbytes_per_sec": 0, 00:10:34.134 "r_mbytes_per_sec": 0, 00:10:34.134 "w_mbytes_per_sec": 0 00:10:34.134 }, 00:10:34.134 "claimed": true, 00:10:34.134 "claim_type": "exclusive_write", 00:10:34.134 "zoned": false, 00:10:34.134 "supported_io_types": { 00:10:34.134 "read": true, 00:10:34.134 "write": true, 00:10:34.134 "unmap": true, 00:10:34.134 "flush": true, 00:10:34.134 "reset": true, 00:10:34.134 "nvme_admin": false, 00:10:34.134 "nvme_io": false, 00:10:34.134 "nvme_io_md": false, 00:10:34.134 "write_zeroes": true, 00:10:34.134 "zcopy": true, 00:10:34.134 "get_zone_info": false, 00:10:34.134 "zone_management": false, 00:10:34.134 "zone_append": false, 00:10:34.134 "compare": false, 00:10:34.134 "compare_and_write": false, 00:10:34.134 "abort": true, 00:10:34.134 "seek_hole": false, 00:10:34.134 "seek_data": false, 00:10:34.134 "copy": true, 00:10:34.134 "nvme_iov_md": false 00:10:34.135 }, 00:10:34.135 "memory_domains": [ 00:10:34.135 { 00:10:34.135 "dma_device_id": "system", 00:10:34.135 "dma_device_type": 1 00:10:34.135 }, 00:10:34.135 { 00:10:34.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.135 "dma_device_type": 2 00:10:34.135 } 00:10:34.135 ], 00:10:34.135 "driver_specific": {} 00:10:34.135 } 00:10:34.135 ] 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.135 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.394 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.394 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.394 "name": "Existed_Raid", 00:10:34.394 "uuid": "7448fec4-a633-45a9-ae05-f6e87c934637", 00:10:34.394 "strip_size_kb": 0, 00:10:34.394 "state": "online", 00:10:34.394 "raid_level": "raid1", 00:10:34.394 "superblock": true, 00:10:34.394 "num_base_bdevs": 4, 
00:10:34.394 "num_base_bdevs_discovered": 4, 00:10:34.394 "num_base_bdevs_operational": 4, 00:10:34.394 "base_bdevs_list": [ 00:10:34.394 { 00:10:34.394 "name": "BaseBdev1", 00:10:34.394 "uuid": "f94d2685-d102-45f2-a39a-1be374751065", 00:10:34.394 "is_configured": true, 00:10:34.394 "data_offset": 2048, 00:10:34.394 "data_size": 63488 00:10:34.394 }, 00:10:34.394 { 00:10:34.394 "name": "BaseBdev2", 00:10:34.394 "uuid": "cde4db2c-67c5-4200-86d2-a9b9f9a1bf88", 00:10:34.394 "is_configured": true, 00:10:34.394 "data_offset": 2048, 00:10:34.394 "data_size": 63488 00:10:34.394 }, 00:10:34.394 { 00:10:34.394 "name": "BaseBdev3", 00:10:34.394 "uuid": "c1298125-b6f8-481b-8d48-4355b8da63a3", 00:10:34.394 "is_configured": true, 00:10:34.394 "data_offset": 2048, 00:10:34.394 "data_size": 63488 00:10:34.394 }, 00:10:34.394 { 00:10:34.394 "name": "BaseBdev4", 00:10:34.394 "uuid": "5a2c4516-be4e-4a59-ae23-2eccd1061ed3", 00:10:34.394 "is_configured": true, 00:10:34.394 "data_offset": 2048, 00:10:34.394 "data_size": 63488 00:10:34.394 } 00:10:34.394 ] 00:10:34.394 }' 00:10:34.394 01:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.394 01:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.654 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.654 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.654 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.654 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.654 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.654 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.654 
01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.654 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.654 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.654 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.654 [2024-10-15 01:11:47.312969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.654 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.654 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.654 "name": "Existed_Raid", 00:10:34.654 "aliases": [ 00:10:34.654 "7448fec4-a633-45a9-ae05-f6e87c934637" 00:10:34.654 ], 00:10:34.654 "product_name": "Raid Volume", 00:10:34.654 "block_size": 512, 00:10:34.654 "num_blocks": 63488, 00:10:34.654 "uuid": "7448fec4-a633-45a9-ae05-f6e87c934637", 00:10:34.654 "assigned_rate_limits": { 00:10:34.654 "rw_ios_per_sec": 0, 00:10:34.654 "rw_mbytes_per_sec": 0, 00:10:34.654 "r_mbytes_per_sec": 0, 00:10:34.654 "w_mbytes_per_sec": 0 00:10:34.654 }, 00:10:34.654 "claimed": false, 00:10:34.654 "zoned": false, 00:10:34.654 "supported_io_types": { 00:10:34.654 "read": true, 00:10:34.654 "write": true, 00:10:34.654 "unmap": false, 00:10:34.654 "flush": false, 00:10:34.654 "reset": true, 00:10:34.654 "nvme_admin": false, 00:10:34.654 "nvme_io": false, 00:10:34.654 "nvme_io_md": false, 00:10:34.654 "write_zeroes": true, 00:10:34.654 "zcopy": false, 00:10:34.654 "get_zone_info": false, 00:10:34.654 "zone_management": false, 00:10:34.654 "zone_append": false, 00:10:34.654 "compare": false, 00:10:34.654 "compare_and_write": false, 00:10:34.654 "abort": false, 00:10:34.654 "seek_hole": false, 00:10:34.654 "seek_data": false, 00:10:34.654 "copy": false, 00:10:34.654 
"nvme_iov_md": false 00:10:34.654 }, 00:10:34.654 "memory_domains": [ 00:10:34.654 { 00:10:34.654 "dma_device_id": "system", 00:10:34.654 "dma_device_type": 1 00:10:34.654 }, 00:10:34.654 { 00:10:34.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.654 "dma_device_type": 2 00:10:34.654 }, 00:10:34.654 { 00:10:34.654 "dma_device_id": "system", 00:10:34.654 "dma_device_type": 1 00:10:34.654 }, 00:10:34.654 { 00:10:34.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.654 "dma_device_type": 2 00:10:34.654 }, 00:10:34.654 { 00:10:34.654 "dma_device_id": "system", 00:10:34.654 "dma_device_type": 1 00:10:34.654 }, 00:10:34.654 { 00:10:34.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.654 "dma_device_type": 2 00:10:34.654 }, 00:10:34.654 { 00:10:34.654 "dma_device_id": "system", 00:10:34.654 "dma_device_type": 1 00:10:34.654 }, 00:10:34.654 { 00:10:34.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.654 "dma_device_type": 2 00:10:34.654 } 00:10:34.654 ], 00:10:34.654 "driver_specific": { 00:10:34.654 "raid": { 00:10:34.654 "uuid": "7448fec4-a633-45a9-ae05-f6e87c934637", 00:10:34.654 "strip_size_kb": 0, 00:10:34.654 "state": "online", 00:10:34.654 "raid_level": "raid1", 00:10:34.654 "superblock": true, 00:10:34.654 "num_base_bdevs": 4, 00:10:34.654 "num_base_bdevs_discovered": 4, 00:10:34.654 "num_base_bdevs_operational": 4, 00:10:34.654 "base_bdevs_list": [ 00:10:34.654 { 00:10:34.654 "name": "BaseBdev1", 00:10:34.654 "uuid": "f94d2685-d102-45f2-a39a-1be374751065", 00:10:34.654 "is_configured": true, 00:10:34.654 "data_offset": 2048, 00:10:34.654 "data_size": 63488 00:10:34.654 }, 00:10:34.654 { 00:10:34.654 "name": "BaseBdev2", 00:10:34.654 "uuid": "cde4db2c-67c5-4200-86d2-a9b9f9a1bf88", 00:10:34.654 "is_configured": true, 00:10:34.654 "data_offset": 2048, 00:10:34.655 "data_size": 63488 00:10:34.655 }, 00:10:34.655 { 00:10:34.655 "name": "BaseBdev3", 00:10:34.655 "uuid": "c1298125-b6f8-481b-8d48-4355b8da63a3", 00:10:34.655 "is_configured": true, 
00:10:34.655 "data_offset": 2048, 00:10:34.655 "data_size": 63488 00:10:34.655 }, 00:10:34.655 { 00:10:34.655 "name": "BaseBdev4", 00:10:34.655 "uuid": "5a2c4516-be4e-4a59-ae23-2eccd1061ed3", 00:10:34.655 "is_configured": true, 00:10:34.655 "data_offset": 2048, 00:10:34.655 "data_size": 63488 00:10:34.655 } 00:10:34.655 ] 00:10:34.655 } 00:10:34.655 } 00:10:34.655 }' 00:10:34.655 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:34.915 BaseBdev2 00:10:34.915 BaseBdev3 00:10:34.915 BaseBdev4' 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.915 01:11:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.915 [2024-10-15 01:11:47.620192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:34.915 01:11:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.915 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.175 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.175 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.175 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.175 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.175 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.175 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.175 "name": "Existed_Raid", 00:10:35.175 "uuid": "7448fec4-a633-45a9-ae05-f6e87c934637", 00:10:35.175 "strip_size_kb": 0, 00:10:35.175 
"state": "online", 00:10:35.175 "raid_level": "raid1", 00:10:35.175 "superblock": true, 00:10:35.175 "num_base_bdevs": 4, 00:10:35.175 "num_base_bdevs_discovered": 3, 00:10:35.175 "num_base_bdevs_operational": 3, 00:10:35.175 "base_bdevs_list": [ 00:10:35.175 { 00:10:35.175 "name": null, 00:10:35.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.175 "is_configured": false, 00:10:35.175 "data_offset": 0, 00:10:35.175 "data_size": 63488 00:10:35.175 }, 00:10:35.175 { 00:10:35.175 "name": "BaseBdev2", 00:10:35.175 "uuid": "cde4db2c-67c5-4200-86d2-a9b9f9a1bf88", 00:10:35.175 "is_configured": true, 00:10:35.175 "data_offset": 2048, 00:10:35.175 "data_size": 63488 00:10:35.175 }, 00:10:35.175 { 00:10:35.175 "name": "BaseBdev3", 00:10:35.175 "uuid": "c1298125-b6f8-481b-8d48-4355b8da63a3", 00:10:35.175 "is_configured": true, 00:10:35.175 "data_offset": 2048, 00:10:35.175 "data_size": 63488 00:10:35.175 }, 00:10:35.175 { 00:10:35.175 "name": "BaseBdev4", 00:10:35.175 "uuid": "5a2c4516-be4e-4a59-ae23-2eccd1061ed3", 00:10:35.175 "is_configured": true, 00:10:35.175 "data_offset": 2048, 00:10:35.175 "data_size": 63488 00:10:35.175 } 00:10:35.175 ] 00:10:35.175 }' 00:10:35.175 01:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.175 01:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.469 01:11:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.469 [2024-10-15 01:11:48.170505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.469 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.728 [2024-10-15 01:11:48.241723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.728 [2024-10-15 01:11:48.308841] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:35.728 [2024-10-15 01:11:48.308943] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.728 [2024-10-15 01:11:48.320499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.728 [2024-10-15 01:11:48.320562] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.728 [2024-10-15 01:11:48.320574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.728 BaseBdev2 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:35.728 [ 00:10:35.728 { 00:10:35.728 "name": "BaseBdev2", 00:10:35.728 "aliases": [ 00:10:35.728 "5bcac404-f8f1-4fe3-a81c-536faf458ffc" 00:10:35.728 ], 00:10:35.728 "product_name": "Malloc disk", 00:10:35.728 "block_size": 512, 00:10:35.728 "num_blocks": 65536, 00:10:35.728 "uuid": "5bcac404-f8f1-4fe3-a81c-536faf458ffc", 00:10:35.728 "assigned_rate_limits": { 00:10:35.728 "rw_ios_per_sec": 0, 00:10:35.728 "rw_mbytes_per_sec": 0, 00:10:35.728 "r_mbytes_per_sec": 0, 00:10:35.728 "w_mbytes_per_sec": 0 00:10:35.728 }, 00:10:35.728 "claimed": false, 00:10:35.728 "zoned": false, 00:10:35.728 "supported_io_types": { 00:10:35.728 "read": true, 00:10:35.728 "write": true, 00:10:35.728 "unmap": true, 00:10:35.728 "flush": true, 00:10:35.728 "reset": true, 00:10:35.728 "nvme_admin": false, 00:10:35.728 "nvme_io": false, 00:10:35.728 "nvme_io_md": false, 00:10:35.728 "write_zeroes": true, 00:10:35.728 "zcopy": true, 00:10:35.728 "get_zone_info": false, 00:10:35.728 "zone_management": false, 00:10:35.728 "zone_append": false, 00:10:35.728 "compare": false, 00:10:35.728 "compare_and_write": false, 00:10:35.728 "abort": true, 00:10:35.728 "seek_hole": false, 00:10:35.728 "seek_data": false, 00:10:35.728 "copy": true, 00:10:35.728 "nvme_iov_md": false 00:10:35.728 }, 00:10:35.728 "memory_domains": [ 00:10:35.728 { 00:10:35.728 "dma_device_id": "system", 00:10:35.728 "dma_device_type": 1 00:10:35.728 }, 00:10:35.728 { 00:10:35.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.728 "dma_device_type": 2 00:10:35.728 } 00:10:35.728 ], 00:10:35.728 "driver_specific": {} 00:10:35.728 } 00:10:35.728 ] 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.728 01:11:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.728 BaseBdev3 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:35.728 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.728 01:11:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.728 [ 00:10:35.728 { 00:10:35.728 "name": "BaseBdev3", 00:10:35.728 "aliases": [ 00:10:35.728 "6809a7c9-8592-4742-9ca3-b167ddd85c92" 00:10:35.728 ], 00:10:35.728 "product_name": "Malloc disk", 00:10:35.728 "block_size": 512, 00:10:35.728 "num_blocks": 65536, 00:10:35.989 "uuid": "6809a7c9-8592-4742-9ca3-b167ddd85c92", 00:10:35.989 "assigned_rate_limits": { 00:10:35.989 "rw_ios_per_sec": 0, 00:10:35.989 "rw_mbytes_per_sec": 0, 00:10:35.989 "r_mbytes_per_sec": 0, 00:10:35.989 "w_mbytes_per_sec": 0 00:10:35.989 }, 00:10:35.989 "claimed": false, 00:10:35.989 "zoned": false, 00:10:35.989 "supported_io_types": { 00:10:35.989 "read": true, 00:10:35.989 "write": true, 00:10:35.989 "unmap": true, 00:10:35.989 "flush": true, 00:10:35.989 "reset": true, 00:10:35.989 "nvme_admin": false, 00:10:35.989 "nvme_io": false, 00:10:35.989 "nvme_io_md": false, 00:10:35.989 "write_zeroes": true, 00:10:35.989 "zcopy": true, 00:10:35.989 "get_zone_info": false, 00:10:35.989 "zone_management": false, 00:10:35.989 "zone_append": false, 00:10:35.989 "compare": false, 00:10:35.989 "compare_and_write": false, 00:10:35.989 "abort": true, 00:10:35.989 "seek_hole": false, 00:10:35.989 "seek_data": false, 00:10:35.989 "copy": true, 00:10:35.989 "nvme_iov_md": false 00:10:35.989 }, 00:10:35.989 "memory_domains": [ 00:10:35.989 { 00:10:35.989 "dma_device_id": "system", 00:10:35.989 "dma_device_type": 1 00:10:35.989 }, 00:10:35.989 { 00:10:35.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.989 "dma_device_type": 2 00:10:35.989 } 00:10:35.989 ], 00:10:35.989 "driver_specific": {} 00:10:35.989 } 00:10:35.989 ] 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.989 BaseBdev4 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.989 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.990 [ 00:10:35.990 { 00:10:35.990 "name": "BaseBdev4", 00:10:35.990 "aliases": [ 00:10:35.990 "b63b711e-4471-4ef8-99ed-9706b9824919" 00:10:35.990 ], 00:10:35.990 "product_name": "Malloc disk", 00:10:35.990 "block_size": 512, 00:10:35.990 "num_blocks": 65536, 00:10:35.990 "uuid": "b63b711e-4471-4ef8-99ed-9706b9824919", 00:10:35.990 "assigned_rate_limits": { 00:10:35.990 "rw_ios_per_sec": 0, 00:10:35.990 "rw_mbytes_per_sec": 0, 00:10:35.990 "r_mbytes_per_sec": 0, 00:10:35.990 "w_mbytes_per_sec": 0 00:10:35.990 }, 00:10:35.990 "claimed": false, 00:10:35.990 "zoned": false, 00:10:35.990 "supported_io_types": { 00:10:35.990 "read": true, 00:10:35.990 "write": true, 00:10:35.990 "unmap": true, 00:10:35.990 "flush": true, 00:10:35.990 "reset": true, 00:10:35.990 "nvme_admin": false, 00:10:35.990 "nvme_io": false, 00:10:35.990 "nvme_io_md": false, 00:10:35.990 "write_zeroes": true, 00:10:35.990 "zcopy": true, 00:10:35.990 "get_zone_info": false, 00:10:35.990 "zone_management": false, 00:10:35.990 "zone_append": false, 00:10:35.990 "compare": false, 00:10:35.990 "compare_and_write": false, 00:10:35.990 "abort": true, 00:10:35.990 "seek_hole": false, 00:10:35.990 "seek_data": false, 00:10:35.990 "copy": true, 00:10:35.990 "nvme_iov_md": false 00:10:35.990 }, 00:10:35.990 "memory_domains": [ 00:10:35.990 { 00:10:35.990 "dma_device_id": "system", 00:10:35.990 "dma_device_type": 1 00:10:35.990 }, 00:10:35.990 { 00:10:35.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.990 "dma_device_type": 2 00:10:35.990 } 00:10:35.990 ], 00:10:35.990 "driver_specific": {} 00:10:35.990 } 00:10:35.990 ] 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.990 [2024-10-15 01:11:48.518039] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.990 [2024-10-15 01:11:48.518084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.990 [2024-10-15 01:11:48.518104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.990 [2024-10-15 01:11:48.519940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.990 [2024-10-15 01:11:48.519991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.990 "name": "Existed_Raid", 00:10:35.990 "uuid": "d19bcce9-d89c-407e-a33a-989c13e0f1bc", 00:10:35.990 "strip_size_kb": 0, 00:10:35.990 "state": "configuring", 00:10:35.990 "raid_level": "raid1", 00:10:35.990 "superblock": true, 00:10:35.990 "num_base_bdevs": 4, 00:10:35.990 "num_base_bdevs_discovered": 3, 00:10:35.990 "num_base_bdevs_operational": 4, 00:10:35.990 "base_bdevs_list": [ 00:10:35.990 { 00:10:35.990 "name": "BaseBdev1", 00:10:35.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.990 "is_configured": false, 00:10:35.990 "data_offset": 0, 00:10:35.990 "data_size": 0 00:10:35.990 }, 00:10:35.990 { 00:10:35.990 "name": "BaseBdev2", 00:10:35.990 "uuid": "5bcac404-f8f1-4fe3-a81c-536faf458ffc", 
00:10:35.990 "is_configured": true, 00:10:35.990 "data_offset": 2048, 00:10:35.990 "data_size": 63488 00:10:35.990 }, 00:10:35.990 { 00:10:35.990 "name": "BaseBdev3", 00:10:35.990 "uuid": "6809a7c9-8592-4742-9ca3-b167ddd85c92", 00:10:35.990 "is_configured": true, 00:10:35.990 "data_offset": 2048, 00:10:35.990 "data_size": 63488 00:10:35.990 }, 00:10:35.990 { 00:10:35.990 "name": "BaseBdev4", 00:10:35.990 "uuid": "b63b711e-4471-4ef8-99ed-9706b9824919", 00:10:35.990 "is_configured": true, 00:10:35.990 "data_offset": 2048, 00:10:35.990 "data_size": 63488 00:10:35.990 } 00:10:35.990 ] 00:10:35.990 }' 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.990 01:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.560 [2024-10-15 01:11:49.009194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.560 "name": "Existed_Raid", 00:10:36.560 "uuid": "d19bcce9-d89c-407e-a33a-989c13e0f1bc", 00:10:36.560 "strip_size_kb": 0, 00:10:36.560 "state": "configuring", 00:10:36.560 "raid_level": "raid1", 00:10:36.560 "superblock": true, 00:10:36.560 "num_base_bdevs": 4, 00:10:36.560 "num_base_bdevs_discovered": 2, 00:10:36.560 "num_base_bdevs_operational": 4, 00:10:36.560 "base_bdevs_list": [ 00:10:36.560 { 00:10:36.560 "name": "BaseBdev1", 00:10:36.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.560 "is_configured": false, 00:10:36.560 "data_offset": 0, 00:10:36.560 "data_size": 0 00:10:36.560 }, 00:10:36.560 { 00:10:36.560 "name": null, 00:10:36.560 "uuid": "5bcac404-f8f1-4fe3-a81c-536faf458ffc", 00:10:36.560 
"is_configured": false, 00:10:36.560 "data_offset": 0, 00:10:36.560 "data_size": 63488 00:10:36.560 }, 00:10:36.560 { 00:10:36.560 "name": "BaseBdev3", 00:10:36.560 "uuid": "6809a7c9-8592-4742-9ca3-b167ddd85c92", 00:10:36.560 "is_configured": true, 00:10:36.560 "data_offset": 2048, 00:10:36.560 "data_size": 63488 00:10:36.560 }, 00:10:36.560 { 00:10:36.560 "name": "BaseBdev4", 00:10:36.560 "uuid": "b63b711e-4471-4ef8-99ed-9706b9824919", 00:10:36.560 "is_configured": true, 00:10:36.560 "data_offset": 2048, 00:10:36.560 "data_size": 63488 00:10:36.560 } 00:10:36.560 ] 00:10:36.560 }' 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.560 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.821 [2024-10-15 01:11:49.439441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.821 BaseBdev1 
00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.821 [ 00:10:36.821 { 00:10:36.821 "name": "BaseBdev1", 00:10:36.821 "aliases": [ 00:10:36.821 "f6e6c55e-545f-4b46-a642-9503b907b96e" 00:10:36.821 ], 00:10:36.821 "product_name": "Malloc disk", 00:10:36.821 "block_size": 512, 00:10:36.821 "num_blocks": 65536, 00:10:36.821 "uuid": "f6e6c55e-545f-4b46-a642-9503b907b96e", 00:10:36.821 "assigned_rate_limits": { 00:10:36.821 
"rw_ios_per_sec": 0, 00:10:36.821 "rw_mbytes_per_sec": 0, 00:10:36.821 "r_mbytes_per_sec": 0, 00:10:36.821 "w_mbytes_per_sec": 0 00:10:36.821 }, 00:10:36.821 "claimed": true, 00:10:36.821 "claim_type": "exclusive_write", 00:10:36.821 "zoned": false, 00:10:36.821 "supported_io_types": { 00:10:36.821 "read": true, 00:10:36.821 "write": true, 00:10:36.821 "unmap": true, 00:10:36.821 "flush": true, 00:10:36.821 "reset": true, 00:10:36.821 "nvme_admin": false, 00:10:36.821 "nvme_io": false, 00:10:36.821 "nvme_io_md": false, 00:10:36.821 "write_zeroes": true, 00:10:36.821 "zcopy": true, 00:10:36.821 "get_zone_info": false, 00:10:36.821 "zone_management": false, 00:10:36.821 "zone_append": false, 00:10:36.821 "compare": false, 00:10:36.821 "compare_and_write": false, 00:10:36.821 "abort": true, 00:10:36.821 "seek_hole": false, 00:10:36.821 "seek_data": false, 00:10:36.821 "copy": true, 00:10:36.821 "nvme_iov_md": false 00:10:36.821 }, 00:10:36.821 "memory_domains": [ 00:10:36.821 { 00:10:36.821 "dma_device_id": "system", 00:10:36.821 "dma_device_type": 1 00:10:36.821 }, 00:10:36.821 { 00:10:36.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.821 "dma_device_type": 2 00:10:36.821 } 00:10:36.821 ], 00:10:36.821 "driver_specific": {} 00:10:36.821 } 00:10:36.821 ] 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.821 "name": "Existed_Raid", 00:10:36.821 "uuid": "d19bcce9-d89c-407e-a33a-989c13e0f1bc", 00:10:36.821 "strip_size_kb": 0, 00:10:36.821 "state": "configuring", 00:10:36.821 "raid_level": "raid1", 00:10:36.821 "superblock": true, 00:10:36.821 "num_base_bdevs": 4, 00:10:36.821 "num_base_bdevs_discovered": 3, 00:10:36.821 "num_base_bdevs_operational": 4, 00:10:36.821 "base_bdevs_list": [ 00:10:36.821 { 00:10:36.821 "name": "BaseBdev1", 00:10:36.821 "uuid": "f6e6c55e-545f-4b46-a642-9503b907b96e", 00:10:36.821 "is_configured": true, 00:10:36.821 "data_offset": 2048, 00:10:36.821 "data_size": 63488 
00:10:36.821 }, 00:10:36.821 { 00:10:36.821 "name": null, 00:10:36.821 "uuid": "5bcac404-f8f1-4fe3-a81c-536faf458ffc", 00:10:36.821 "is_configured": false, 00:10:36.821 "data_offset": 0, 00:10:36.821 "data_size": 63488 00:10:36.821 }, 00:10:36.821 { 00:10:36.821 "name": "BaseBdev3", 00:10:36.821 "uuid": "6809a7c9-8592-4742-9ca3-b167ddd85c92", 00:10:36.821 "is_configured": true, 00:10:36.821 "data_offset": 2048, 00:10:36.821 "data_size": 63488 00:10:36.821 }, 00:10:36.821 { 00:10:36.821 "name": "BaseBdev4", 00:10:36.821 "uuid": "b63b711e-4471-4ef8-99ed-9706b9824919", 00:10:36.821 "is_configured": true, 00:10:36.821 "data_offset": 2048, 00:10:36.821 "data_size": 63488 00:10:36.821 } 00:10:36.821 ] 00:10:36.821 }' 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.821 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.390 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:37.390 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.390 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.390 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.390 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.391 
[2024-10-15 01:11:49.918690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.391 01:11:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.391 "name": "Existed_Raid", 00:10:37.391 "uuid": "d19bcce9-d89c-407e-a33a-989c13e0f1bc", 00:10:37.391 "strip_size_kb": 0, 00:10:37.391 "state": "configuring", 00:10:37.391 "raid_level": "raid1", 00:10:37.391 "superblock": true, 00:10:37.391 "num_base_bdevs": 4, 00:10:37.391 "num_base_bdevs_discovered": 2, 00:10:37.391 "num_base_bdevs_operational": 4, 00:10:37.391 "base_bdevs_list": [ 00:10:37.391 { 00:10:37.391 "name": "BaseBdev1", 00:10:37.391 "uuid": "f6e6c55e-545f-4b46-a642-9503b907b96e", 00:10:37.391 "is_configured": true, 00:10:37.391 "data_offset": 2048, 00:10:37.391 "data_size": 63488 00:10:37.391 }, 00:10:37.391 { 00:10:37.391 "name": null, 00:10:37.391 "uuid": "5bcac404-f8f1-4fe3-a81c-536faf458ffc", 00:10:37.391 "is_configured": false, 00:10:37.391 "data_offset": 0, 00:10:37.391 "data_size": 63488 00:10:37.391 }, 00:10:37.391 { 00:10:37.391 "name": null, 00:10:37.391 "uuid": "6809a7c9-8592-4742-9ca3-b167ddd85c92", 00:10:37.391 "is_configured": false, 00:10:37.391 "data_offset": 0, 00:10:37.391 "data_size": 63488 00:10:37.391 }, 00:10:37.391 { 00:10:37.391 "name": "BaseBdev4", 00:10:37.391 "uuid": "b63b711e-4471-4ef8-99ed-9706b9824919", 00:10:37.391 "is_configured": true, 00:10:37.391 "data_offset": 2048, 00:10:37.391 "data_size": 63488 00:10:37.391 } 00:10:37.391 ] 00:10:37.391 }' 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.391 01:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.650 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.650 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:37.650 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.650 
01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.650 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.910 [2024-10-15 01:11:50.389913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.910 "name": "Existed_Raid", 00:10:37.910 "uuid": "d19bcce9-d89c-407e-a33a-989c13e0f1bc", 00:10:37.910 "strip_size_kb": 0, 00:10:37.910 "state": "configuring", 00:10:37.910 "raid_level": "raid1", 00:10:37.910 "superblock": true, 00:10:37.910 "num_base_bdevs": 4, 00:10:37.910 "num_base_bdevs_discovered": 3, 00:10:37.910 "num_base_bdevs_operational": 4, 00:10:37.910 "base_bdevs_list": [ 00:10:37.910 { 00:10:37.910 "name": "BaseBdev1", 00:10:37.910 "uuid": "f6e6c55e-545f-4b46-a642-9503b907b96e", 00:10:37.910 "is_configured": true, 00:10:37.910 "data_offset": 2048, 00:10:37.910 "data_size": 63488 00:10:37.910 }, 00:10:37.910 { 00:10:37.910 "name": null, 00:10:37.910 "uuid": "5bcac404-f8f1-4fe3-a81c-536faf458ffc", 00:10:37.910 "is_configured": false, 00:10:37.910 "data_offset": 0, 00:10:37.910 "data_size": 63488 00:10:37.910 }, 00:10:37.910 { 00:10:37.910 "name": "BaseBdev3", 00:10:37.910 "uuid": "6809a7c9-8592-4742-9ca3-b167ddd85c92", 00:10:37.910 "is_configured": true, 00:10:37.910 "data_offset": 2048, 00:10:37.910 "data_size": 63488 00:10:37.910 }, 00:10:37.910 { 00:10:37.910 "name": "BaseBdev4", 00:10:37.910 "uuid": 
"b63b711e-4471-4ef8-99ed-9706b9824919", 00:10:37.910 "is_configured": true, 00:10:37.910 "data_offset": 2048, 00:10:37.910 "data_size": 63488 00:10:37.910 } 00:10:37.910 ] 00:10:37.910 }' 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.910 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.170 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.170 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:38.170 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.170 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.170 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.430 [2024-10-15 01:11:50.909093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.430 "name": "Existed_Raid", 00:10:38.430 "uuid": "d19bcce9-d89c-407e-a33a-989c13e0f1bc", 00:10:38.430 "strip_size_kb": 0, 00:10:38.430 "state": "configuring", 00:10:38.430 "raid_level": "raid1", 00:10:38.430 "superblock": true, 00:10:38.430 "num_base_bdevs": 4, 00:10:38.430 "num_base_bdevs_discovered": 2, 00:10:38.430 "num_base_bdevs_operational": 4, 00:10:38.430 "base_bdevs_list": [ 00:10:38.430 { 00:10:38.430 "name": null, 00:10:38.430 
"uuid": "f6e6c55e-545f-4b46-a642-9503b907b96e", 00:10:38.430 "is_configured": false, 00:10:38.430 "data_offset": 0, 00:10:38.430 "data_size": 63488 00:10:38.430 }, 00:10:38.430 { 00:10:38.430 "name": null, 00:10:38.430 "uuid": "5bcac404-f8f1-4fe3-a81c-536faf458ffc", 00:10:38.430 "is_configured": false, 00:10:38.430 "data_offset": 0, 00:10:38.430 "data_size": 63488 00:10:38.430 }, 00:10:38.430 { 00:10:38.430 "name": "BaseBdev3", 00:10:38.430 "uuid": "6809a7c9-8592-4742-9ca3-b167ddd85c92", 00:10:38.430 "is_configured": true, 00:10:38.430 "data_offset": 2048, 00:10:38.430 "data_size": 63488 00:10:38.430 }, 00:10:38.430 { 00:10:38.430 "name": "BaseBdev4", 00:10:38.430 "uuid": "b63b711e-4471-4ef8-99ed-9706b9824919", 00:10:38.430 "is_configured": true, 00:10:38.430 "data_offset": 2048, 00:10:38.430 "data_size": 63488 00:10:38.430 } 00:10:38.430 ] 00:10:38.430 }' 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.430 01:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.690 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:38.690 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.690 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.690 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.690 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.690 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:38.690 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:38.690 01:11:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.690 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.690 [2024-10-15 01:11:51.378768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.690 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.691 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:38.691 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.691 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.691 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.691 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.691 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.691 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.691 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.691 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.691 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.691 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.691 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.691 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.691 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.691 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.950 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.950 "name": "Existed_Raid", 00:10:38.950 "uuid": "d19bcce9-d89c-407e-a33a-989c13e0f1bc", 00:10:38.950 "strip_size_kb": 0, 00:10:38.950 "state": "configuring", 00:10:38.950 "raid_level": "raid1", 00:10:38.950 "superblock": true, 00:10:38.950 "num_base_bdevs": 4, 00:10:38.950 "num_base_bdevs_discovered": 3, 00:10:38.950 "num_base_bdevs_operational": 4, 00:10:38.950 "base_bdevs_list": [ 00:10:38.950 { 00:10:38.950 "name": null, 00:10:38.950 "uuid": "f6e6c55e-545f-4b46-a642-9503b907b96e", 00:10:38.950 "is_configured": false, 00:10:38.950 "data_offset": 0, 00:10:38.950 "data_size": 63488 00:10:38.950 }, 00:10:38.950 { 00:10:38.950 "name": "BaseBdev2", 00:10:38.950 "uuid": "5bcac404-f8f1-4fe3-a81c-536faf458ffc", 00:10:38.950 "is_configured": true, 00:10:38.950 "data_offset": 2048, 00:10:38.950 "data_size": 63488 00:10:38.950 }, 00:10:38.950 { 00:10:38.950 "name": "BaseBdev3", 00:10:38.950 "uuid": "6809a7c9-8592-4742-9ca3-b167ddd85c92", 00:10:38.950 "is_configured": true, 00:10:38.950 "data_offset": 2048, 00:10:38.950 "data_size": 63488 00:10:38.950 }, 00:10:38.950 { 00:10:38.950 "name": "BaseBdev4", 00:10:38.950 "uuid": "b63b711e-4471-4ef8-99ed-9706b9824919", 00:10:38.950 "is_configured": true, 00:10:38.950 "data_offset": 2048, 00:10:38.950 "data_size": 63488 00:10:38.950 } 00:10:38.950 ] 00:10:38.950 }' 00:10:38.951 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.951 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f6e6c55e-545f-4b46-a642-9503b907b96e 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.210 [2024-10-15 01:11:51.916809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:39.210 [2024-10-15 01:11:51.916978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:39.210 [2024-10-15 01:11:51.916994] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:39.210 [2024-10-15 01:11:51.917272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:10:39.210 
[2024-10-15 01:11:51.917417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:39.210 [2024-10-15 01:11:51.917435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:39.210 [2024-10-15 01:11:51.917535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.210 NewBaseBdev 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:39.210 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:39.211 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:39.211 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:39.211 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:39.211 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.211 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.211 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.211 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:39.211 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.211 01:11:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.471 [ 00:10:39.471 { 00:10:39.471 "name": "NewBaseBdev", 00:10:39.471 "aliases": [ 00:10:39.471 "f6e6c55e-545f-4b46-a642-9503b907b96e" 00:10:39.471 ], 00:10:39.471 "product_name": "Malloc disk", 00:10:39.471 "block_size": 512, 00:10:39.471 "num_blocks": 65536, 00:10:39.471 "uuid": "f6e6c55e-545f-4b46-a642-9503b907b96e", 00:10:39.471 "assigned_rate_limits": { 00:10:39.471 "rw_ios_per_sec": 0, 00:10:39.471 "rw_mbytes_per_sec": 0, 00:10:39.471 "r_mbytes_per_sec": 0, 00:10:39.471 "w_mbytes_per_sec": 0 00:10:39.471 }, 00:10:39.471 "claimed": true, 00:10:39.471 "claim_type": "exclusive_write", 00:10:39.471 "zoned": false, 00:10:39.471 "supported_io_types": { 00:10:39.471 "read": true, 00:10:39.471 "write": true, 00:10:39.471 "unmap": true, 00:10:39.471 "flush": true, 00:10:39.471 "reset": true, 00:10:39.471 "nvme_admin": false, 00:10:39.471 "nvme_io": false, 00:10:39.471 "nvme_io_md": false, 00:10:39.471 "write_zeroes": true, 00:10:39.471 "zcopy": true, 00:10:39.471 "get_zone_info": false, 00:10:39.471 "zone_management": false, 00:10:39.471 "zone_append": false, 00:10:39.471 "compare": false, 00:10:39.471 "compare_and_write": false, 00:10:39.471 "abort": true, 00:10:39.471 "seek_hole": false, 00:10:39.471 "seek_data": false, 00:10:39.471 "copy": true, 00:10:39.471 "nvme_iov_md": false 00:10:39.471 }, 00:10:39.471 "memory_domains": [ 00:10:39.471 { 00:10:39.471 "dma_device_id": "system", 00:10:39.471 "dma_device_type": 1 00:10:39.471 }, 00:10:39.471 { 00:10:39.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.471 "dma_device_type": 2 00:10:39.471 } 00:10:39.471 ], 00:10:39.471 "driver_specific": {} 00:10:39.471 } 00:10:39.471 ] 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.471 01:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.471 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.471 "name": "Existed_Raid", 00:10:39.471 "uuid": "d19bcce9-d89c-407e-a33a-989c13e0f1bc", 00:10:39.471 "strip_size_kb": 0, 00:10:39.471 "state": "online", 00:10:39.471 "raid_level": 
"raid1", 00:10:39.471 "superblock": true, 00:10:39.471 "num_base_bdevs": 4, 00:10:39.471 "num_base_bdevs_discovered": 4, 00:10:39.471 "num_base_bdevs_operational": 4, 00:10:39.471 "base_bdevs_list": [ 00:10:39.471 { 00:10:39.471 "name": "NewBaseBdev", 00:10:39.471 "uuid": "f6e6c55e-545f-4b46-a642-9503b907b96e", 00:10:39.471 "is_configured": true, 00:10:39.471 "data_offset": 2048, 00:10:39.471 "data_size": 63488 00:10:39.471 }, 00:10:39.471 { 00:10:39.471 "name": "BaseBdev2", 00:10:39.471 "uuid": "5bcac404-f8f1-4fe3-a81c-536faf458ffc", 00:10:39.471 "is_configured": true, 00:10:39.471 "data_offset": 2048, 00:10:39.471 "data_size": 63488 00:10:39.471 }, 00:10:39.471 { 00:10:39.471 "name": "BaseBdev3", 00:10:39.471 "uuid": "6809a7c9-8592-4742-9ca3-b167ddd85c92", 00:10:39.471 "is_configured": true, 00:10:39.471 "data_offset": 2048, 00:10:39.471 "data_size": 63488 00:10:39.471 }, 00:10:39.471 { 00:10:39.471 "name": "BaseBdev4", 00:10:39.471 "uuid": "b63b711e-4471-4ef8-99ed-9706b9824919", 00:10:39.471 "is_configured": true, 00:10:39.471 "data_offset": 2048, 00:10:39.471 "data_size": 63488 00:10:39.471 } 00:10:39.471 ] 00:10:39.471 }' 00:10:39.471 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.471 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.731 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:39.731 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:39.731 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.731 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.731 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.731 01:11:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.731 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:39.731 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.731 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.731 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.731 [2024-10-15 01:11:52.436314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.992 "name": "Existed_Raid", 00:10:39.992 "aliases": [ 00:10:39.992 "d19bcce9-d89c-407e-a33a-989c13e0f1bc" 00:10:39.992 ], 00:10:39.992 "product_name": "Raid Volume", 00:10:39.992 "block_size": 512, 00:10:39.992 "num_blocks": 63488, 00:10:39.992 "uuid": "d19bcce9-d89c-407e-a33a-989c13e0f1bc", 00:10:39.992 "assigned_rate_limits": { 00:10:39.992 "rw_ios_per_sec": 0, 00:10:39.992 "rw_mbytes_per_sec": 0, 00:10:39.992 "r_mbytes_per_sec": 0, 00:10:39.992 "w_mbytes_per_sec": 0 00:10:39.992 }, 00:10:39.992 "claimed": false, 00:10:39.992 "zoned": false, 00:10:39.992 "supported_io_types": { 00:10:39.992 "read": true, 00:10:39.992 "write": true, 00:10:39.992 "unmap": false, 00:10:39.992 "flush": false, 00:10:39.992 "reset": true, 00:10:39.992 "nvme_admin": false, 00:10:39.992 "nvme_io": false, 00:10:39.992 "nvme_io_md": false, 00:10:39.992 "write_zeroes": true, 00:10:39.992 "zcopy": false, 00:10:39.992 "get_zone_info": false, 00:10:39.992 "zone_management": false, 00:10:39.992 "zone_append": false, 00:10:39.992 "compare": false, 00:10:39.992 "compare_and_write": false, 00:10:39.992 "abort": false, 00:10:39.992 "seek_hole": false, 
00:10:39.992 "seek_data": false, 00:10:39.992 "copy": false, 00:10:39.992 "nvme_iov_md": false 00:10:39.992 }, 00:10:39.992 "memory_domains": [ 00:10:39.992 { 00:10:39.992 "dma_device_id": "system", 00:10:39.992 "dma_device_type": 1 00:10:39.992 }, 00:10:39.992 { 00:10:39.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.992 "dma_device_type": 2 00:10:39.992 }, 00:10:39.992 { 00:10:39.992 "dma_device_id": "system", 00:10:39.992 "dma_device_type": 1 00:10:39.992 }, 00:10:39.992 { 00:10:39.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.992 "dma_device_type": 2 00:10:39.992 }, 00:10:39.992 { 00:10:39.992 "dma_device_id": "system", 00:10:39.992 "dma_device_type": 1 00:10:39.992 }, 00:10:39.992 { 00:10:39.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.992 "dma_device_type": 2 00:10:39.992 }, 00:10:39.992 { 00:10:39.992 "dma_device_id": "system", 00:10:39.992 "dma_device_type": 1 00:10:39.992 }, 00:10:39.992 { 00:10:39.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.992 "dma_device_type": 2 00:10:39.992 } 00:10:39.992 ], 00:10:39.992 "driver_specific": { 00:10:39.992 "raid": { 00:10:39.992 "uuid": "d19bcce9-d89c-407e-a33a-989c13e0f1bc", 00:10:39.992 "strip_size_kb": 0, 00:10:39.992 "state": "online", 00:10:39.992 "raid_level": "raid1", 00:10:39.992 "superblock": true, 00:10:39.992 "num_base_bdevs": 4, 00:10:39.992 "num_base_bdevs_discovered": 4, 00:10:39.992 "num_base_bdevs_operational": 4, 00:10:39.992 "base_bdevs_list": [ 00:10:39.992 { 00:10:39.992 "name": "NewBaseBdev", 00:10:39.992 "uuid": "f6e6c55e-545f-4b46-a642-9503b907b96e", 00:10:39.992 "is_configured": true, 00:10:39.992 "data_offset": 2048, 00:10:39.992 "data_size": 63488 00:10:39.992 }, 00:10:39.992 { 00:10:39.992 "name": "BaseBdev2", 00:10:39.992 "uuid": "5bcac404-f8f1-4fe3-a81c-536faf458ffc", 00:10:39.992 "is_configured": true, 00:10:39.992 "data_offset": 2048, 00:10:39.992 "data_size": 63488 00:10:39.992 }, 00:10:39.992 { 00:10:39.992 "name": "BaseBdev3", 00:10:39.992 "uuid": 
"6809a7c9-8592-4742-9ca3-b167ddd85c92", 00:10:39.992 "is_configured": true, 00:10:39.992 "data_offset": 2048, 00:10:39.992 "data_size": 63488 00:10:39.992 }, 00:10:39.992 { 00:10:39.992 "name": "BaseBdev4", 00:10:39.992 "uuid": "b63b711e-4471-4ef8-99ed-9706b9824919", 00:10:39.992 "is_configured": true, 00:10:39.992 "data_offset": 2048, 00:10:39.992 "data_size": 63488 00:10:39.992 } 00:10:39.992 ] 00:10:39.992 } 00:10:39.992 } 00:10:39.992 }' 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:39.992 BaseBdev2 00:10:39.992 BaseBdev3 00:10:39.992 BaseBdev4' 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.992 
01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.992 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.253 [2024-10-15 01:11:52.719501] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.253 [2024-10-15 01:11:52.719530] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.253 [2024-10-15 01:11:52.719614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.253 [2024-10-15 01:11:52.719876] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.253 [2024-10-15 01:11:52.719901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:40.253 01:11:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.253 01:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84362 00:10:40.253 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84362 ']' 00:10:40.253 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84362 00:10:40.253 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:40.253 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.253 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84362 00:10:40.253 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:40.253 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:40.253 killing process with pid 84362 00:10:40.253 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84362' 00:10:40.253 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84362 00:10:40.253 [2024-10-15 01:11:52.752361] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:40.253 01:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84362 00:10:40.253 [2024-10-15 01:11:52.792883] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.513 01:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:40.513 00:10:40.513 real 0m9.681s 00:10:40.513 user 0m16.710s 00:10:40.513 sys 0m1.968s 00:10:40.513 01:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.513 01:11:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.513 ************************************ 00:10:40.513 END TEST raid_state_function_test_sb 00:10:40.513 ************************************ 00:10:40.513 01:11:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:40.513 01:11:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:40.513 01:11:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:40.513 01:11:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.513 ************************************ 00:10:40.513 START TEST raid_superblock_test 00:10:40.513 ************************************ 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85010 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85010 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85010 ']' 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.513 01:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.513 [2024-10-15 01:11:53.160121] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:10:40.513 [2024-10-15 01:11:53.160246] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85010 ] 00:10:40.773 [2024-10-15 01:11:53.304223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.773 [2024-10-15 01:11:53.330782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.773 [2024-10-15 01:11:53.373422] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.773 [2024-10-15 01:11:53.373463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.343 01:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.343 01:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:41.343 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:41.343 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:41.343 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:41.343 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:41.343 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:41.343 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:41.343 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:41.343 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:41.343 01:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:41.343 
01:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.343 01:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.343 malloc1 00:10:41.343 01:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.343 [2024-10-15 01:11:54.007998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:41.343 [2024-10-15 01:11:54.008068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.343 [2024-10-15 01:11:54.008088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:41.343 [2024-10-15 01:11:54.008099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.343 [2024-10-15 01:11:54.010302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.343 [2024-10-15 01:11:54.010336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:41.343 pt1 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.343 malloc2 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.343 [2024-10-15 01:11:54.036547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:41.343 [2024-10-15 01:11:54.036614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.343 [2024-10-15 01:11:54.036630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:41.343 [2024-10-15 01:11:54.036641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.343 [2024-10-15 01:11:54.038684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.343 [2024-10-15 01:11:54.038717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:41.343 
pt2 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:41.343 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:41.344 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:41.344 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:41.344 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:41.344 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:41.344 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:41.344 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:41.344 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:41.344 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.344 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.344 malloc3 00:10:41.344 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.344 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:41.344 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.344 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.344 [2024-10-15 01:11:54.065123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:41.344 [2024-10-15 01:11:54.065202] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.344 [2024-10-15 01:11:54.065220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:41.344 [2024-10-15 01:11:54.065230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.604 [2024-10-15 01:11:54.067295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.604 [2024-10-15 01:11:54.067327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:41.604 pt3 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.604 malloc4 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.604 [2024-10-15 01:11:54.112121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:41.604 [2024-10-15 01:11:54.112243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.604 [2024-10-15 01:11:54.112278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:41.604 [2024-10-15 01:11:54.112303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.604 [2024-10-15 01:11:54.116399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.604 [2024-10-15 01:11:54.116465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:41.604 pt4 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.604 [2024-10-15 01:11:54.124694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:41.604 [2024-10-15 01:11:54.127190] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:41.604 [2024-10-15 01:11:54.127291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:41.604 [2024-10-15 01:11:54.127355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:41.604 [2024-10-15 01:11:54.127580] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:41.604 [2024-10-15 01:11:54.127608] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:41.604 [2024-10-15 01:11:54.127971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:41.604 [2024-10-15 01:11:54.128199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:41.604 [2024-10-15 01:11:54.128223] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:41.604 [2024-10-15 01:11:54.128394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.604 
01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.604 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.605 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.605 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.605 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.605 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.605 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.605 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.605 "name": "raid_bdev1", 00:10:41.605 "uuid": "8d3ad193-080e-4620-801c-033d380268ae", 00:10:41.605 "strip_size_kb": 0, 00:10:41.605 "state": "online", 00:10:41.605 "raid_level": "raid1", 00:10:41.605 "superblock": true, 00:10:41.605 "num_base_bdevs": 4, 00:10:41.605 "num_base_bdevs_discovered": 4, 00:10:41.605 "num_base_bdevs_operational": 4, 00:10:41.605 "base_bdevs_list": [ 00:10:41.605 { 00:10:41.605 "name": "pt1", 00:10:41.605 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:41.605 "is_configured": true, 00:10:41.605 "data_offset": 2048, 00:10:41.605 "data_size": 63488 00:10:41.605 }, 00:10:41.605 { 00:10:41.605 "name": "pt2", 00:10:41.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:41.605 "is_configured": true, 00:10:41.605 "data_offset": 2048, 00:10:41.605 "data_size": 63488 00:10:41.605 }, 00:10:41.605 { 00:10:41.605 "name": "pt3", 00:10:41.605 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:41.605 "is_configured": true, 00:10:41.605 "data_offset": 2048, 00:10:41.605 "data_size": 63488 
00:10:41.605 }, 00:10:41.605 { 00:10:41.605 "name": "pt4", 00:10:41.605 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:41.605 "is_configured": true, 00:10:41.605 "data_offset": 2048, 00:10:41.605 "data_size": 63488 00:10:41.605 } 00:10:41.605 ] 00:10:41.605 }' 00:10:41.605 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.605 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.864 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:41.864 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:41.864 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.864 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.864 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.864 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.864 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:41.864 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.864 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.864 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.124 [2024-10-15 01:11:54.592165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.124 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.124 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:42.124 "name": "raid_bdev1", 00:10:42.124 "aliases": [ 00:10:42.125 "8d3ad193-080e-4620-801c-033d380268ae" 00:10:42.125 ], 
00:10:42.125 "product_name": "Raid Volume", 00:10:42.125 "block_size": 512, 00:10:42.125 "num_blocks": 63488, 00:10:42.125 "uuid": "8d3ad193-080e-4620-801c-033d380268ae", 00:10:42.125 "assigned_rate_limits": { 00:10:42.125 "rw_ios_per_sec": 0, 00:10:42.125 "rw_mbytes_per_sec": 0, 00:10:42.125 "r_mbytes_per_sec": 0, 00:10:42.125 "w_mbytes_per_sec": 0 00:10:42.125 }, 00:10:42.125 "claimed": false, 00:10:42.125 "zoned": false, 00:10:42.125 "supported_io_types": { 00:10:42.125 "read": true, 00:10:42.125 "write": true, 00:10:42.125 "unmap": false, 00:10:42.125 "flush": false, 00:10:42.125 "reset": true, 00:10:42.125 "nvme_admin": false, 00:10:42.125 "nvme_io": false, 00:10:42.125 "nvme_io_md": false, 00:10:42.125 "write_zeroes": true, 00:10:42.125 "zcopy": false, 00:10:42.125 "get_zone_info": false, 00:10:42.125 "zone_management": false, 00:10:42.125 "zone_append": false, 00:10:42.125 "compare": false, 00:10:42.125 "compare_and_write": false, 00:10:42.125 "abort": false, 00:10:42.125 "seek_hole": false, 00:10:42.125 "seek_data": false, 00:10:42.125 "copy": false, 00:10:42.125 "nvme_iov_md": false 00:10:42.125 }, 00:10:42.125 "memory_domains": [ 00:10:42.125 { 00:10:42.125 "dma_device_id": "system", 00:10:42.125 "dma_device_type": 1 00:10:42.125 }, 00:10:42.125 { 00:10:42.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.125 "dma_device_type": 2 00:10:42.125 }, 00:10:42.125 { 00:10:42.125 "dma_device_id": "system", 00:10:42.125 "dma_device_type": 1 00:10:42.125 }, 00:10:42.125 { 00:10:42.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.125 "dma_device_type": 2 00:10:42.125 }, 00:10:42.125 { 00:10:42.125 "dma_device_id": "system", 00:10:42.125 "dma_device_type": 1 00:10:42.125 }, 00:10:42.125 { 00:10:42.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.125 "dma_device_type": 2 00:10:42.125 }, 00:10:42.125 { 00:10:42.125 "dma_device_id": "system", 00:10:42.125 "dma_device_type": 1 00:10:42.125 }, 00:10:42.125 { 00:10:42.125 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:42.125 "dma_device_type": 2 00:10:42.125 } 00:10:42.125 ], 00:10:42.125 "driver_specific": { 00:10:42.125 "raid": { 00:10:42.125 "uuid": "8d3ad193-080e-4620-801c-033d380268ae", 00:10:42.125 "strip_size_kb": 0, 00:10:42.125 "state": "online", 00:10:42.125 "raid_level": "raid1", 00:10:42.125 "superblock": true, 00:10:42.125 "num_base_bdevs": 4, 00:10:42.125 "num_base_bdevs_discovered": 4, 00:10:42.125 "num_base_bdevs_operational": 4, 00:10:42.125 "base_bdevs_list": [ 00:10:42.125 { 00:10:42.125 "name": "pt1", 00:10:42.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:42.125 "is_configured": true, 00:10:42.125 "data_offset": 2048, 00:10:42.125 "data_size": 63488 00:10:42.125 }, 00:10:42.125 { 00:10:42.125 "name": "pt2", 00:10:42.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:42.125 "is_configured": true, 00:10:42.125 "data_offset": 2048, 00:10:42.125 "data_size": 63488 00:10:42.125 }, 00:10:42.125 { 00:10:42.125 "name": "pt3", 00:10:42.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:42.125 "is_configured": true, 00:10:42.125 "data_offset": 2048, 00:10:42.125 "data_size": 63488 00:10:42.125 }, 00:10:42.125 { 00:10:42.125 "name": "pt4", 00:10:42.125 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:42.125 "is_configured": true, 00:10:42.125 "data_offset": 2048, 00:10:42.125 "data_size": 63488 00:10:42.125 } 00:10:42.125 ] 00:10:42.125 } 00:10:42.125 } 00:10:42.125 }' 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:42.125 pt2 00:10:42.125 pt3 00:10:42.125 pt4' 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.125 01:11:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.125 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.385 [2024-10-15 01:11:54.891554] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8d3ad193-080e-4620-801c-033d380268ae 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8d3ad193-080e-4620-801c-033d380268ae ']' 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.385 [2024-10-15 01:11:54.939231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:42.385 [2024-10-15 01:11:54.939258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.385 [2024-10-15 01:11:54.939332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.385 [2024-10-15 01:11:54.939420] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.385 [2024-10-15 01:11:54.939437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.385 01:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.385 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.385 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:42.385 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.386 01:11:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.386 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.386 [2024-10-15 01:11:55.102948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:42.386 [2024-10-15 01:11:55.104808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:42.386 [2024-10-15 01:11:55.104862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:42.386 [2024-10-15 01:11:55.104892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:42.386 [2024-10-15 01:11:55.104937] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:42.386 [2024-10-15 01:11:55.104979] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:42.386 [2024-10-15 01:11:55.105017] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:42.386 [2024-10-15 01:11:55.105035] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:42.386 [2024-10-15 01:11:55.105050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:42.386 [2024-10-15 01:11:55.105060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name 
raid_bdev1, state configuring 00:10:42.386 request: 00:10:42.386 { 00:10:42.386 "name": "raid_bdev1", 00:10:42.386 "raid_level": "raid1", 00:10:42.386 "base_bdevs": [ 00:10:42.386 "malloc1", 00:10:42.386 "malloc2", 00:10:42.386 "malloc3", 00:10:42.386 "malloc4" 00:10:42.386 ], 00:10:42.386 "superblock": false, 00:10:42.646 "method": "bdev_raid_create", 00:10:42.646 "req_id": 1 00:10:42.646 } 00:10:42.646 Got JSON-RPC error response 00:10:42.646 response: 00:10:42.646 { 00:10:42.646 "code": -17, 00:10:42.646 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:42.646 } 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:42.646 
01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.646 [2024-10-15 01:11:55.154826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:42.646 [2024-10-15 01:11:55.154880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.646 [2024-10-15 01:11:55.154906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:42.646 [2024-10-15 01:11:55.154914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.646 [2024-10-15 01:11:55.157051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.646 [2024-10-15 01:11:55.157085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:42.646 [2024-10-15 01:11:55.157154] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:42.646 [2024-10-15 01:11:55.157201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:42.646 pt1 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.646 01:11:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.646 "name": "raid_bdev1", 00:10:42.646 "uuid": "8d3ad193-080e-4620-801c-033d380268ae", 00:10:42.646 "strip_size_kb": 0, 00:10:42.646 "state": "configuring", 00:10:42.646 "raid_level": "raid1", 00:10:42.646 "superblock": true, 00:10:42.646 "num_base_bdevs": 4, 00:10:42.646 "num_base_bdevs_discovered": 1, 00:10:42.646 "num_base_bdevs_operational": 4, 00:10:42.646 "base_bdevs_list": [ 00:10:42.646 { 00:10:42.646 "name": "pt1", 00:10:42.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:42.646 "is_configured": true, 00:10:42.646 "data_offset": 2048, 00:10:42.646 "data_size": 63488 00:10:42.646 }, 00:10:42.646 { 00:10:42.646 "name": null, 00:10:42.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:42.646 "is_configured": false, 00:10:42.646 "data_offset": 2048, 00:10:42.646 "data_size": 63488 00:10:42.646 }, 00:10:42.646 { 00:10:42.646 "name": null, 00:10:42.646 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:42.646 
"is_configured": false, 00:10:42.646 "data_offset": 2048, 00:10:42.646 "data_size": 63488 00:10:42.646 }, 00:10:42.646 { 00:10:42.646 "name": null, 00:10:42.646 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:42.646 "is_configured": false, 00:10:42.646 "data_offset": 2048, 00:10:42.646 "data_size": 63488 00:10:42.646 } 00:10:42.646 ] 00:10:42.646 }' 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.646 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.906 [2024-10-15 01:11:55.594119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:42.906 [2024-10-15 01:11:55.594188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.906 [2024-10-15 01:11:55.594209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:42.906 [2024-10-15 01:11:55.594218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.906 [2024-10-15 01:11:55.594593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.906 [2024-10-15 01:11:55.594621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:42.906 [2024-10-15 01:11:55.594694] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:42.906 [2024-10-15 01:11:55.594719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:10:42.906 pt2 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.906 [2024-10-15 01:11:55.606131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.906 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.166 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.166 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.166 "name": "raid_bdev1", 00:10:43.166 "uuid": "8d3ad193-080e-4620-801c-033d380268ae", 00:10:43.166 "strip_size_kb": 0, 00:10:43.166 "state": "configuring", 00:10:43.166 "raid_level": "raid1", 00:10:43.166 "superblock": true, 00:10:43.166 "num_base_bdevs": 4, 00:10:43.166 "num_base_bdevs_discovered": 1, 00:10:43.166 "num_base_bdevs_operational": 4, 00:10:43.166 "base_bdevs_list": [ 00:10:43.166 { 00:10:43.166 "name": "pt1", 00:10:43.166 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.166 "is_configured": true, 00:10:43.166 "data_offset": 2048, 00:10:43.166 "data_size": 63488 00:10:43.166 }, 00:10:43.166 { 00:10:43.166 "name": null, 00:10:43.166 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.166 "is_configured": false, 00:10:43.166 "data_offset": 0, 00:10:43.166 "data_size": 63488 00:10:43.166 }, 00:10:43.166 { 00:10:43.166 "name": null, 00:10:43.166 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.166 "is_configured": false, 00:10:43.166 "data_offset": 2048, 00:10:43.166 "data_size": 63488 00:10:43.166 }, 00:10:43.166 { 00:10:43.166 "name": null, 00:10:43.166 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:43.166 "is_configured": false, 00:10:43.166 "data_offset": 2048, 00:10:43.166 "data_size": 63488 00:10:43.166 } 00:10:43.166 ] 00:10:43.166 }' 00:10:43.166 01:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.166 01:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.427 [2024-10-15 01:11:56.057358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:43.427 [2024-10-15 01:11:56.057444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.427 [2024-10-15 01:11:56.057465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:43.427 [2024-10-15 01:11:56.057476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.427 [2024-10-15 01:11:56.057871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.427 [2024-10-15 01:11:56.057899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:43.427 [2024-10-15 01:11:56.057975] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:43.427 [2024-10-15 01:11:56.057997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:43.427 pt2 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:43.427 01:11:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.427 [2024-10-15 01:11:56.069303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:43.427 [2024-10-15 01:11:56.069369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.427 [2024-10-15 01:11:56.069387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:43.427 [2024-10-15 01:11:56.069397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.427 [2024-10-15 01:11:56.069776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.427 [2024-10-15 01:11:56.069801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:43.427 [2024-10-15 01:11:56.069863] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:43.427 [2024-10-15 01:11:56.069885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:43.427 pt3 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.427 [2024-10-15 01:11:56.081302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:43.427 [2024-10-15 
01:11:56.081371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.427 [2024-10-15 01:11:56.081389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:43.427 [2024-10-15 01:11:56.081398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.427 [2024-10-15 01:11:56.081722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.427 [2024-10-15 01:11:56.081748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:43.427 [2024-10-15 01:11:56.081806] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:43.427 [2024-10-15 01:11:56.081827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:43.427 [2024-10-15 01:11:56.081932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:43.427 [2024-10-15 01:11:56.081952] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:43.427 [2024-10-15 01:11:56.082171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:43.427 [2024-10-15 01:11:56.082331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:43.427 [2024-10-15 01:11:56.082345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:43.427 [2024-10-15 01:11:56.082454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.427 pt4 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.427 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.427 "name": "raid_bdev1", 00:10:43.427 "uuid": "8d3ad193-080e-4620-801c-033d380268ae", 00:10:43.427 "strip_size_kb": 0, 00:10:43.427 "state": "online", 00:10:43.427 "raid_level": "raid1", 00:10:43.427 "superblock": true, 00:10:43.427 "num_base_bdevs": 4, 00:10:43.428 
"num_base_bdevs_discovered": 4, 00:10:43.428 "num_base_bdevs_operational": 4, 00:10:43.428 "base_bdevs_list": [ 00:10:43.428 { 00:10:43.428 "name": "pt1", 00:10:43.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.428 "is_configured": true, 00:10:43.428 "data_offset": 2048, 00:10:43.428 "data_size": 63488 00:10:43.428 }, 00:10:43.428 { 00:10:43.428 "name": "pt2", 00:10:43.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.428 "is_configured": true, 00:10:43.428 "data_offset": 2048, 00:10:43.428 "data_size": 63488 00:10:43.428 }, 00:10:43.428 { 00:10:43.428 "name": "pt3", 00:10:43.428 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.428 "is_configured": true, 00:10:43.428 "data_offset": 2048, 00:10:43.428 "data_size": 63488 00:10:43.428 }, 00:10:43.428 { 00:10:43.428 "name": "pt4", 00:10:43.428 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:43.428 "is_configured": true, 00:10:43.428 "data_offset": 2048, 00:10:43.428 "data_size": 63488 00:10:43.428 } 00:10:43.428 ] 00:10:43.428 }' 00:10:43.428 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.428 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.998 [2024-10-15 01:11:56.556818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.998 "name": "raid_bdev1", 00:10:43.998 "aliases": [ 00:10:43.998 "8d3ad193-080e-4620-801c-033d380268ae" 00:10:43.998 ], 00:10:43.998 "product_name": "Raid Volume", 00:10:43.998 "block_size": 512, 00:10:43.998 "num_blocks": 63488, 00:10:43.998 "uuid": "8d3ad193-080e-4620-801c-033d380268ae", 00:10:43.998 "assigned_rate_limits": { 00:10:43.998 "rw_ios_per_sec": 0, 00:10:43.998 "rw_mbytes_per_sec": 0, 00:10:43.998 "r_mbytes_per_sec": 0, 00:10:43.998 "w_mbytes_per_sec": 0 00:10:43.998 }, 00:10:43.998 "claimed": false, 00:10:43.998 "zoned": false, 00:10:43.998 "supported_io_types": { 00:10:43.998 "read": true, 00:10:43.998 "write": true, 00:10:43.998 "unmap": false, 00:10:43.998 "flush": false, 00:10:43.998 "reset": true, 00:10:43.998 "nvme_admin": false, 00:10:43.998 "nvme_io": false, 00:10:43.998 "nvme_io_md": false, 00:10:43.998 "write_zeroes": true, 00:10:43.998 "zcopy": false, 00:10:43.998 "get_zone_info": false, 00:10:43.998 "zone_management": false, 00:10:43.998 "zone_append": false, 00:10:43.998 "compare": false, 00:10:43.998 "compare_and_write": false, 00:10:43.998 "abort": false, 00:10:43.998 "seek_hole": false, 00:10:43.998 "seek_data": false, 00:10:43.998 "copy": false, 00:10:43.998 "nvme_iov_md": false 00:10:43.998 }, 00:10:43.998 "memory_domains": [ 00:10:43.998 { 00:10:43.998 "dma_device_id": "system", 00:10:43.998 
"dma_device_type": 1 00:10:43.998 }, 00:10:43.998 { 00:10:43.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.998 "dma_device_type": 2 00:10:43.998 }, 00:10:43.998 { 00:10:43.998 "dma_device_id": "system", 00:10:43.998 "dma_device_type": 1 00:10:43.998 }, 00:10:43.998 { 00:10:43.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.998 "dma_device_type": 2 00:10:43.998 }, 00:10:43.998 { 00:10:43.998 "dma_device_id": "system", 00:10:43.998 "dma_device_type": 1 00:10:43.998 }, 00:10:43.998 { 00:10:43.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.998 "dma_device_type": 2 00:10:43.998 }, 00:10:43.998 { 00:10:43.998 "dma_device_id": "system", 00:10:43.998 "dma_device_type": 1 00:10:43.998 }, 00:10:43.998 { 00:10:43.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.998 "dma_device_type": 2 00:10:43.998 } 00:10:43.998 ], 00:10:43.998 "driver_specific": { 00:10:43.998 "raid": { 00:10:43.998 "uuid": "8d3ad193-080e-4620-801c-033d380268ae", 00:10:43.998 "strip_size_kb": 0, 00:10:43.998 "state": "online", 00:10:43.998 "raid_level": "raid1", 00:10:43.998 "superblock": true, 00:10:43.998 "num_base_bdevs": 4, 00:10:43.998 "num_base_bdevs_discovered": 4, 00:10:43.998 "num_base_bdevs_operational": 4, 00:10:43.998 "base_bdevs_list": [ 00:10:43.998 { 00:10:43.998 "name": "pt1", 00:10:43.998 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.998 "is_configured": true, 00:10:43.998 "data_offset": 2048, 00:10:43.998 "data_size": 63488 00:10:43.998 }, 00:10:43.998 { 00:10:43.998 "name": "pt2", 00:10:43.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.998 "is_configured": true, 00:10:43.998 "data_offset": 2048, 00:10:43.998 "data_size": 63488 00:10:43.998 }, 00:10:43.998 { 00:10:43.998 "name": "pt3", 00:10:43.998 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.998 "is_configured": true, 00:10:43.998 "data_offset": 2048, 00:10:43.998 "data_size": 63488 00:10:43.998 }, 00:10:43.998 { 00:10:43.998 "name": "pt4", 00:10:43.998 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:10:43.998 "is_configured": true, 00:10:43.998 "data_offset": 2048, 00:10:43.998 "data_size": 63488 00:10:43.998 } 00:10:43.998 ] 00:10:43.998 } 00:10:43.998 } 00:10:43.998 }' 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:43.998 pt2 00:10:43.998 pt3 00:10:43.998 pt4' 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.998 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.999 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.999 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.999 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:43.999 01:11:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.999 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.999 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.259 01:11:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:44.259 [2024-10-15 01:11:56.856333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8d3ad193-080e-4620-801c-033d380268ae '!=' 8d3ad193-080e-4620-801c-033d380268ae ']' 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.259 [2024-10-15 01:11:56.903947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:44.259 
01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.259 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.259 "name": "raid_bdev1", 00:10:44.259 "uuid": "8d3ad193-080e-4620-801c-033d380268ae", 00:10:44.259 "strip_size_kb": 0, 00:10:44.259 "state": 
"online", 00:10:44.259 "raid_level": "raid1", 00:10:44.259 "superblock": true, 00:10:44.259 "num_base_bdevs": 4, 00:10:44.259 "num_base_bdevs_discovered": 3, 00:10:44.259 "num_base_bdevs_operational": 3, 00:10:44.259 "base_bdevs_list": [ 00:10:44.259 { 00:10:44.259 "name": null, 00:10:44.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.259 "is_configured": false, 00:10:44.259 "data_offset": 0, 00:10:44.259 "data_size": 63488 00:10:44.259 }, 00:10:44.259 { 00:10:44.259 "name": "pt2", 00:10:44.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.259 "is_configured": true, 00:10:44.259 "data_offset": 2048, 00:10:44.259 "data_size": 63488 00:10:44.259 }, 00:10:44.259 { 00:10:44.259 "name": "pt3", 00:10:44.259 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.259 "is_configured": true, 00:10:44.259 "data_offset": 2048, 00:10:44.260 "data_size": 63488 00:10:44.260 }, 00:10:44.260 { 00:10:44.260 "name": "pt4", 00:10:44.260 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:44.260 "is_configured": true, 00:10:44.260 "data_offset": 2048, 00:10:44.260 "data_size": 63488 00:10:44.260 } 00:10:44.260 ] 00:10:44.260 }' 00:10:44.260 01:11:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.260 01:11:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.830 [2024-10-15 01:11:57.327233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:44.830 [2024-10-15 01:11:57.327262] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.830 [2024-10-15 01:11:57.327340] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.830 [2024-10-15 01:11:57.327413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.830 [2024-10-15 01:11:57.327424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.830 [2024-10-15 01:11:57.419034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:44.830 [2024-10-15 
01:11:57.419084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.830 [2024-10-15 01:11:57.419116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:44.830 [2024-10-15 01:11:57.419127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.830 [2024-10-15 01:11:57.421282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.830 [2024-10-15 01:11:57.421319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:44.830 [2024-10-15 01:11:57.421385] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:44.830 [2024-10-15 01:11:57.421419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:44.830 pt2 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.830 01:11:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.830 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.830 "name": "raid_bdev1", 00:10:44.830 "uuid": "8d3ad193-080e-4620-801c-033d380268ae", 00:10:44.830 "strip_size_kb": 0, 00:10:44.830 "state": "configuring", 00:10:44.830 "raid_level": "raid1", 00:10:44.830 "superblock": true, 00:10:44.830 "num_base_bdevs": 4, 00:10:44.830 "num_base_bdevs_discovered": 1, 00:10:44.830 "num_base_bdevs_operational": 3, 00:10:44.830 "base_bdevs_list": [ 00:10:44.830 { 00:10:44.830 "name": null, 00:10:44.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.830 "is_configured": false, 00:10:44.830 "data_offset": 2048, 00:10:44.830 "data_size": 63488 00:10:44.830 }, 00:10:44.830 { 00:10:44.830 "name": "pt2", 00:10:44.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.830 "is_configured": true, 00:10:44.830 "data_offset": 2048, 00:10:44.830 "data_size": 63488 00:10:44.830 }, 00:10:44.830 { 00:10:44.830 "name": null, 00:10:44.830 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.830 "is_configured": false, 00:10:44.830 "data_offset": 2048, 00:10:44.831 "data_size": 63488 00:10:44.831 }, 00:10:44.831 { 00:10:44.831 "name": null, 00:10:44.831 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:44.831 "is_configured": false, 00:10:44.831 "data_offset": 2048, 00:10:44.831 "data_size": 63488 00:10:44.831 
} 00:10:44.831 ] 00:10:44.831 }' 00:10:44.831 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.831 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.401 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:45.401 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:45.401 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:45.401 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.401 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.401 [2024-10-15 01:11:57.854342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:45.401 [2024-10-15 01:11:57.854410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.401 [2024-10-15 01:11:57.854431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:45.401 [2024-10-15 01:11:57.854443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.401 [2024-10-15 01:11:57.854864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.401 [2024-10-15 01:11:57.854898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:45.401 [2024-10-15 01:11:57.854977] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:45.401 [2024-10-15 01:11:57.855016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:45.401 pt3 00:10:45.401 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.401 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:45.401 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.401 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.401 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.401 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.401 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.402 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.402 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.402 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.402 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.402 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.402 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.402 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.402 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.402 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.402 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.402 "name": "raid_bdev1", 00:10:45.402 "uuid": "8d3ad193-080e-4620-801c-033d380268ae", 00:10:45.402 "strip_size_kb": 0, 00:10:45.402 "state": "configuring", 00:10:45.402 "raid_level": "raid1", 00:10:45.402 "superblock": true, 00:10:45.402 "num_base_bdevs": 4, 00:10:45.402 "num_base_bdevs_discovered": 2, 
00:10:45.402 "num_base_bdevs_operational": 3, 00:10:45.402 "base_bdevs_list": [ 00:10:45.402 { 00:10:45.402 "name": null, 00:10:45.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.402 "is_configured": false, 00:10:45.402 "data_offset": 2048, 00:10:45.402 "data_size": 63488 00:10:45.402 }, 00:10:45.402 { 00:10:45.402 "name": "pt2", 00:10:45.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.402 "is_configured": true, 00:10:45.402 "data_offset": 2048, 00:10:45.402 "data_size": 63488 00:10:45.402 }, 00:10:45.402 { 00:10:45.402 "name": "pt3", 00:10:45.402 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.402 "is_configured": true, 00:10:45.402 "data_offset": 2048, 00:10:45.402 "data_size": 63488 00:10:45.402 }, 00:10:45.402 { 00:10:45.402 "name": null, 00:10:45.402 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:45.402 "is_configured": false, 00:10:45.402 "data_offset": 2048, 00:10:45.402 "data_size": 63488 00:10:45.402 } 00:10:45.402 ] 00:10:45.402 }' 00:10:45.402 01:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.402 01:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.663 [2024-10-15 01:11:58.249644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:45.663 [2024-10-15 
01:11:58.249710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.663 [2024-10-15 01:11:58.249733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:45.663 [2024-10-15 01:11:58.249747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.663 [2024-10-15 01:11:58.250201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.663 [2024-10-15 01:11:58.250233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:45.663 [2024-10-15 01:11:58.250312] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:45.663 [2024-10-15 01:11:58.250341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:45.663 [2024-10-15 01:11:58.250444] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:45.663 [2024-10-15 01:11:58.250460] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:45.663 [2024-10-15 01:11:58.250697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:45.663 [2024-10-15 01:11:58.250831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:45.663 [2024-10-15 01:11:58.250846] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:10:45.663 [2024-10-15 01:11:58.250958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.663 pt4 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.663 01:11:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.663 "name": "raid_bdev1", 00:10:45.663 "uuid": "8d3ad193-080e-4620-801c-033d380268ae", 00:10:45.663 "strip_size_kb": 0, 00:10:45.663 "state": "online", 00:10:45.663 "raid_level": "raid1", 00:10:45.663 "superblock": true, 00:10:45.663 "num_base_bdevs": 4, 00:10:45.663 "num_base_bdevs_discovered": 3, 00:10:45.663 "num_base_bdevs_operational": 3, 00:10:45.663 "base_bdevs_list": [ 00:10:45.663 { 00:10:45.663 "name": null, 00:10:45.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.663 
"is_configured": false, 00:10:45.663 "data_offset": 2048, 00:10:45.663 "data_size": 63488 00:10:45.663 }, 00:10:45.663 { 00:10:45.663 "name": "pt2", 00:10:45.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.663 "is_configured": true, 00:10:45.663 "data_offset": 2048, 00:10:45.663 "data_size": 63488 00:10:45.663 }, 00:10:45.663 { 00:10:45.663 "name": "pt3", 00:10:45.663 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.663 "is_configured": true, 00:10:45.663 "data_offset": 2048, 00:10:45.663 "data_size": 63488 00:10:45.663 }, 00:10:45.663 { 00:10:45.663 "name": "pt4", 00:10:45.663 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:45.663 "is_configured": true, 00:10:45.663 "data_offset": 2048, 00:10:45.663 "data_size": 63488 00:10:45.663 } 00:10:45.663 ] 00:10:45.663 }' 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.663 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.232 [2024-10-15 01:11:58.696865] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.232 [2024-10-15 01:11:58.696899] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.232 [2024-10-15 01:11:58.696978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.232 [2024-10-15 01:11:58.697053] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.232 [2024-10-15 01:11:58.697063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 
00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.232 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.232 [2024-10-15 01:11:58.768728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:46.232 [2024-10-15 01:11:58.768788] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:10:46.232 [2024-10-15 01:11:58.768812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:46.232 [2024-10-15 01:11:58.768821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.232 [2024-10-15 01:11:58.771105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.232 [2024-10-15 01:11:58.771143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:46.232 [2024-10-15 01:11:58.771231] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:46.232 [2024-10-15 01:11:58.771275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:46.232 [2024-10-15 01:11:58.771394] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:46.232 [2024-10-15 01:11:58.771407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.232 [2024-10-15 01:11:58.771424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:10:46.232 [2024-10-15 01:11:58.771466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:46.232 [2024-10-15 01:11:58.771563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:46.232 pt1 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.233 "name": "raid_bdev1", 00:10:46.233 "uuid": "8d3ad193-080e-4620-801c-033d380268ae", 00:10:46.233 "strip_size_kb": 0, 00:10:46.233 "state": "configuring", 00:10:46.233 "raid_level": "raid1", 00:10:46.233 "superblock": true, 00:10:46.233 "num_base_bdevs": 4, 00:10:46.233 "num_base_bdevs_discovered": 2, 00:10:46.233 "num_base_bdevs_operational": 3, 00:10:46.233 "base_bdevs_list": [ 00:10:46.233 { 00:10:46.233 "name": null, 00:10:46.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.233 "is_configured": false, 00:10:46.233 
"data_offset": 2048, 00:10:46.233 "data_size": 63488 00:10:46.233 }, 00:10:46.233 { 00:10:46.233 "name": "pt2", 00:10:46.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.233 "is_configured": true, 00:10:46.233 "data_offset": 2048, 00:10:46.233 "data_size": 63488 00:10:46.233 }, 00:10:46.233 { 00:10:46.233 "name": "pt3", 00:10:46.233 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.233 "is_configured": true, 00:10:46.233 "data_offset": 2048, 00:10:46.233 "data_size": 63488 00:10:46.233 }, 00:10:46.233 { 00:10:46.233 "name": null, 00:10:46.233 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:46.233 "is_configured": false, 00:10:46.233 "data_offset": 2048, 00:10:46.233 "data_size": 63488 00:10:46.233 } 00:10:46.233 ] 00:10:46.233 }' 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.233 01:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.493 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:46.493 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:46.493 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.493 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.753 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.753 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:46.753 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:46.753 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.753 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:46.753 [2024-10-15 01:11:59.251921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:46.753 [2024-10-15 01:11:59.251999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.753 [2024-10-15 01:11:59.252022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:10:46.753 [2024-10-15 01:11:59.252033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.753 [2024-10-15 01:11:59.252446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.753 [2024-10-15 01:11:59.252474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:46.754 [2024-10-15 01:11:59.252550] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:46.754 [2024-10-15 01:11:59.252573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:46.754 [2024-10-15 01:11:59.252672] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:10:46.754 [2024-10-15 01:11:59.252694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:46.754 [2024-10-15 01:11:59.252939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:10:46.754 [2024-10-15 01:11:59.253065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:10:46.754 [2024-10-15 01:11:59.253084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:10:46.754 [2024-10-15 01:11:59.253209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.754 pt4 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.754 "name": "raid_bdev1", 00:10:46.754 "uuid": "8d3ad193-080e-4620-801c-033d380268ae", 00:10:46.754 "strip_size_kb": 0, 00:10:46.754 "state": "online", 00:10:46.754 "raid_level": "raid1", 00:10:46.754 "superblock": true, 00:10:46.754 "num_base_bdevs": 4, 00:10:46.754 "num_base_bdevs_discovered": 3, 00:10:46.754 "num_base_bdevs_operational": 3, 00:10:46.754 
"base_bdevs_list": [ 00:10:46.754 { 00:10:46.754 "name": null, 00:10:46.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.754 "is_configured": false, 00:10:46.754 "data_offset": 2048, 00:10:46.754 "data_size": 63488 00:10:46.754 }, 00:10:46.754 { 00:10:46.754 "name": "pt2", 00:10:46.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.754 "is_configured": true, 00:10:46.754 "data_offset": 2048, 00:10:46.754 "data_size": 63488 00:10:46.754 }, 00:10:46.754 { 00:10:46.754 "name": "pt3", 00:10:46.754 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.754 "is_configured": true, 00:10:46.754 "data_offset": 2048, 00:10:46.754 "data_size": 63488 00:10:46.754 }, 00:10:46.754 { 00:10:46.754 "name": "pt4", 00:10:46.754 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:46.754 "is_configured": true, 00:10:46.754 "data_offset": 2048, 00:10:46.754 "data_size": 63488 00:10:46.754 } 00:10:46.754 ] 00:10:46.754 }' 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.754 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.014 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:47.014 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:47.014 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.014 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.274 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.274 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:47.274 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:47.274 01:11:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.274 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.274 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:47.274 [2024-10-15 01:11:59.783395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.274 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.274 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8d3ad193-080e-4620-801c-033d380268ae '!=' 8d3ad193-080e-4620-801c-033d380268ae ']' 00:10:47.274 01:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85010 00:10:47.274 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85010 ']' 00:10:47.274 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85010 00:10:47.274 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:47.275 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:47.275 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85010 00:10:47.275 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:47.275 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:47.275 killing process with pid 85010 00:10:47.275 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85010' 00:10:47.275 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85010 00:10:47.275 [2024-10-15 01:11:59.861668] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:47.275 [2024-10-15 01:11:59.861769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:10:47.275 01:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85010 00:10:47.275 [2024-10-15 01:11:59.861852] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.275 [2024-10-15 01:11:59.861871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:10:47.275 [2024-10-15 01:11:59.906305] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.534 01:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:47.534 00:10:47.534 real 0m7.040s 00:10:47.534 user 0m11.902s 00:10:47.534 sys 0m1.485s 00:10:47.534 01:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.534 01:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.534 ************************************ 00:10:47.534 END TEST raid_superblock_test 00:10:47.534 ************************************ 00:10:47.534 01:12:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:10:47.534 01:12:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:47.534 01:12:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.534 01:12:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.534 ************************************ 00:10:47.534 START TEST raid_read_error_test 00:10:47.534 ************************************ 00:10:47.534 01:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:10:47.534 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:47.534 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:47.534 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- 
# local error_io_type=read 00:10:47.534 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:47.534 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.534 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:47.534 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.534 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.534 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:47.534 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.534 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.534 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 
00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6jT9XT6ZZB 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85485 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85485 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85485 ']' 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:47.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:47.535 01:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.794 [2024-10-15 01:12:00.285891] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:10:47.795 [2024-10-15 01:12:00.286017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85485 ] 00:10:47.795 [2024-10-15 01:12:00.427912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.795 [2024-10-15 01:12:00.456067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.795 [2024-10-15 01:12:00.499181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.795 [2024-10-15 01:12:00.499227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.736 BaseBdev1_malloc 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.736 true 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.736 [2024-10-15 01:12:01.153997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:48.736 [2024-10-15 01:12:01.154054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.736 [2024-10-15 01:12:01.154090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:48.736 [2024-10-15 01:12:01.154099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.736 [2024-10-15 01:12:01.156230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.736 [2024-10-15 01:12:01.156265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:48.736 BaseBdev1 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.736 BaseBdev2_malloc 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.736 true 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.736 [2024-10-15 01:12:01.194597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:48.736 [2024-10-15 01:12:01.194664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.736 [2024-10-15 01:12:01.194683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:48.736 [2024-10-15 01:12:01.194699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.736 [2024-10-15 01:12:01.196786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.736 [2024-10-15 01:12:01.196821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:48.736 BaseBdev2 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.736 BaseBdev3_malloc 00:10:48.736 01:12:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.736 true 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.736 [2024-10-15 01:12:01.235235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:48.736 [2024-10-15 01:12:01.235288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.736 [2024-10-15 01:12:01.235311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:48.736 [2024-10-15 01:12:01.235320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.736 [2024-10-15 01:12:01.237390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.736 [2024-10-15 01:12:01.237423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:48.736 BaseBdev3 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.736 BaseBdev4_malloc 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.736 true 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.736 [2024-10-15 01:12:01.286457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:48.736 [2024-10-15 01:12:01.286509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.736 [2024-10-15 01:12:01.286531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:48.736 [2024-10-15 01:12:01.286539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.736 [2024-10-15 01:12:01.288699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.736 [2024-10-15 01:12:01.288740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:48.736 BaseBdev4 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.736 [2024-10-15 01:12:01.298489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.736 [2024-10-15 01:12:01.300378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.736 [2024-10-15 01:12:01.300458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.736 [2024-10-15 01:12:01.300524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:48.736 [2024-10-15 01:12:01.300751] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:48.736 [2024-10-15 01:12:01.300777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:48.736 [2024-10-15 01:12:01.301063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:48.736 [2024-10-15 01:12:01.301243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:48.736 [2024-10-15 01:12:01.301261] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:48.736 [2024-10-15 01:12:01.301389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:48.736 01:12:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.736 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.737 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.737 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.737 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.737 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.737 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.737 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.737 "name": "raid_bdev1", 00:10:48.737 "uuid": "8397b549-3160-4f8e-840b-b99eba35e9bd", 00:10:48.737 "strip_size_kb": 0, 00:10:48.737 "state": "online", 00:10:48.737 "raid_level": "raid1", 00:10:48.737 "superblock": true, 00:10:48.737 "num_base_bdevs": 4, 00:10:48.737 "num_base_bdevs_discovered": 4, 00:10:48.737 "num_base_bdevs_operational": 4, 00:10:48.737 "base_bdevs_list": [ 00:10:48.737 { 
00:10:48.737 "name": "BaseBdev1", 00:10:48.737 "uuid": "0f889130-71b8-5271-8d47-4c3b093f83fb", 00:10:48.737 "is_configured": true, 00:10:48.737 "data_offset": 2048, 00:10:48.737 "data_size": 63488 00:10:48.737 }, 00:10:48.737 { 00:10:48.737 "name": "BaseBdev2", 00:10:48.737 "uuid": "9eb8640c-b4ff-5603-9d41-e64e1a523caa", 00:10:48.737 "is_configured": true, 00:10:48.737 "data_offset": 2048, 00:10:48.737 "data_size": 63488 00:10:48.737 }, 00:10:48.737 { 00:10:48.737 "name": "BaseBdev3", 00:10:48.737 "uuid": "fe9cfb44-c750-5bf0-9875-7e673a0709ed", 00:10:48.737 "is_configured": true, 00:10:48.737 "data_offset": 2048, 00:10:48.737 "data_size": 63488 00:10:48.737 }, 00:10:48.737 { 00:10:48.737 "name": "BaseBdev4", 00:10:48.737 "uuid": "9d3fc0b7-735a-55aa-a443-96112f381bec", 00:10:48.737 "is_configured": true, 00:10:48.737 "data_offset": 2048, 00:10:48.737 "data_size": 63488 00:10:48.737 } 00:10:48.737 ] 00:10:48.737 }' 00:10:48.737 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.737 01:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.307 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:49.307 01:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:49.307 [2024-10-15 01:12:01.841957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.247 01:12:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.247 01:12:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.247 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.247 "name": "raid_bdev1", 00:10:50.247 "uuid": "8397b549-3160-4f8e-840b-b99eba35e9bd", 00:10:50.247 "strip_size_kb": 0, 00:10:50.247 "state": "online", 00:10:50.247 "raid_level": "raid1", 00:10:50.247 "superblock": true, 00:10:50.247 "num_base_bdevs": 4, 00:10:50.247 "num_base_bdevs_discovered": 4, 00:10:50.247 "num_base_bdevs_operational": 4, 00:10:50.247 "base_bdevs_list": [ 00:10:50.247 { 00:10:50.247 "name": "BaseBdev1", 00:10:50.247 "uuid": "0f889130-71b8-5271-8d47-4c3b093f83fb", 00:10:50.248 "is_configured": true, 00:10:50.248 "data_offset": 2048, 00:10:50.248 "data_size": 63488 00:10:50.248 }, 00:10:50.248 { 00:10:50.248 "name": "BaseBdev2", 00:10:50.248 "uuid": "9eb8640c-b4ff-5603-9d41-e64e1a523caa", 00:10:50.248 "is_configured": true, 00:10:50.248 "data_offset": 2048, 00:10:50.248 "data_size": 63488 00:10:50.248 }, 00:10:50.248 { 00:10:50.248 "name": "BaseBdev3", 00:10:50.248 "uuid": "fe9cfb44-c750-5bf0-9875-7e673a0709ed", 00:10:50.248 "is_configured": true, 00:10:50.248 "data_offset": 2048, 00:10:50.248 "data_size": 63488 00:10:50.248 }, 00:10:50.248 { 00:10:50.248 "name": "BaseBdev4", 00:10:50.248 "uuid": "9d3fc0b7-735a-55aa-a443-96112f381bec", 00:10:50.248 "is_configured": true, 00:10:50.248 "data_offset": 2048, 00:10:50.248 "data_size": 63488 00:10:50.248 } 00:10:50.248 ] 00:10:50.248 }' 00:10:50.248 01:12:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.248 01:12:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.507 01:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:50.507 01:12:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.507 01:12:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:50.507 [2024-10-15 01:12:03.189410] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.507 [2024-10-15 01:12:03.189444] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.507 [2024-10-15 01:12:03.192133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.507 [2024-10-15 01:12:03.192213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.507 [2024-10-15 01:12:03.192341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.507 [2024-10-15 01:12:03.192352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:50.507 { 00:10:50.507 "results": [ 00:10:50.507 { 00:10:50.507 "job": "raid_bdev1", 00:10:50.507 "core_mask": "0x1", 00:10:50.507 "workload": "randrw", 00:10:50.507 "percentage": 50, 00:10:50.507 "status": "finished", 00:10:50.507 "queue_depth": 1, 00:10:50.507 "io_size": 131072, 00:10:50.507 "runtime": 1.348248, 00:10:50.507 "iops": 11357.702737181884, 00:10:50.507 "mibps": 1419.7128421477355, 00:10:50.507 "io_failed": 0, 00:10:50.507 "io_timeout": 0, 00:10:50.507 "avg_latency_us": 85.43282018845761, 00:10:50.507 "min_latency_us": 22.246288209606988, 00:10:50.507 "max_latency_us": 1380.8349344978167 00:10:50.507 } 00:10:50.507 ], 00:10:50.507 "core_count": 1 00:10:50.507 } 00:10:50.507 01:12:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.507 01:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85485 00:10:50.507 01:12:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85485 ']' 00:10:50.507 01:12:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85485 00:10:50.507 01:12:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:50.507 01:12:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:50.507 01:12:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85485 00:10:50.508 01:12:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:50.508 01:12:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:50.508 killing process with pid 85485 00:10:50.508 01:12:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85485' 00:10:50.508 01:12:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85485 00:10:50.508 [2024-10-15 01:12:03.225299] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.508 01:12:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85485 00:10:50.767 [2024-10-15 01:12:03.261393] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.767 01:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6jT9XT6ZZB 00:10:50.767 01:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:50.767 01:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:50.767 01:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:50.767 01:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:50.767 01:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:50.767 01:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:50.767 01:12:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:50.767 00:10:50.767 real 0m3.288s 00:10:50.768 user 0m4.163s 00:10:50.768 sys 0m0.526s 
00:10:50.768 01:12:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.768 01:12:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.768 ************************************ 00:10:50.768 END TEST raid_read_error_test 00:10:50.768 ************************************ 00:10:51.058 01:12:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:10:51.058 01:12:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:51.058 01:12:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:51.058 01:12:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:51.058 ************************************ 00:10:51.058 START TEST raid_write_error_test 00:10:51.058 ************************************ 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.c6OUcDggy1 00:10:51.058 01:12:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85615 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85615 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 85615 ']' 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:51.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:51.058 01:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.058 [2024-10-15 01:12:03.640957] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:10:51.058 [2024-10-15 01:12:03.641078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85615 ] 00:10:51.330 [2024-10-15 01:12:03.786309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.330 [2024-10-15 01:12:03.816323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.330 [2024-10-15 01:12:03.859531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.331 [2024-10-15 01:12:03.859564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.901 BaseBdev1_malloc 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.901 true 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.901 [2024-10-15 01:12:04.514422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:51.901 [2024-10-15 01:12:04.514517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.901 [2024-10-15 01:12:04.514564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:51.901 [2024-10-15 01:12:04.514573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.901 [2024-10-15 01:12:04.516807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.901 [2024-10-15 01:12:04.516856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:51.901 BaseBdev1 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.901 BaseBdev2_malloc 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:51.901 01:12:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.901 true 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.901 [2024-10-15 01:12:04.554982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:51.901 [2024-10-15 01:12:04.555028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.901 [2024-10-15 01:12:04.555062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:51.901 [2024-10-15 01:12:04.555079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.901 [2024-10-15 01:12:04.557148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.901 [2024-10-15 01:12:04.557194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:51.901 BaseBdev2 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:51.901 BaseBdev3_malloc 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.901 true 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.901 [2024-10-15 01:12:04.595805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:51.901 [2024-10-15 01:12:04.595859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.901 [2024-10-15 01:12:04.595900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:51.901 [2024-10-15 01:12:04.595908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.901 [2024-10-15 01:12:04.598021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.901 [2024-10-15 01:12:04.598056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:51.901 BaseBdev3 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.901 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.162 BaseBdev4_malloc 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.162 true 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.162 [2024-10-15 01:12:04.647928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:52.162 [2024-10-15 01:12:04.647978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.162 [2024-10-15 01:12:04.648002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:52.162 [2024-10-15 01:12:04.648011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.162 [2024-10-15 01:12:04.650131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.162 [2024-10-15 01:12:04.650168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:52.162 BaseBdev4 
00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.162 [2024-10-15 01:12:04.659950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.162 [2024-10-15 01:12:04.661813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.162 [2024-10-15 01:12:04.661885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:52.162 [2024-10-15 01:12:04.661947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:52.162 [2024-10-15 01:12:04.662145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:52.162 [2024-10-15 01:12:04.662156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:52.162 [2024-10-15 01:12:04.662419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:52.162 [2024-10-15 01:12:04.662597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:52.162 [2024-10-15 01:12:04.662610] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:52.162 [2024-10-15 01:12:04.662737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.162 "name": "raid_bdev1", 00:10:52.162 "uuid": "9c9d1a98-4240-4d13-bbc0-73fde54902c1", 00:10:52.162 "strip_size_kb": 0, 00:10:52.162 "state": "online", 00:10:52.162 "raid_level": "raid1", 00:10:52.162 "superblock": true, 00:10:52.162 "num_base_bdevs": 4, 00:10:52.162 "num_base_bdevs_discovered": 4, 00:10:52.162 
"num_base_bdevs_operational": 4, 00:10:52.162 "base_bdevs_list": [ 00:10:52.162 { 00:10:52.162 "name": "BaseBdev1", 00:10:52.162 "uuid": "e317f0cd-8468-5a7d-948d-7828d644563a", 00:10:52.162 "is_configured": true, 00:10:52.162 "data_offset": 2048, 00:10:52.162 "data_size": 63488 00:10:52.162 }, 00:10:52.162 { 00:10:52.162 "name": "BaseBdev2", 00:10:52.162 "uuid": "b88c322d-3372-5eef-8986-77ed31e5e9be", 00:10:52.162 "is_configured": true, 00:10:52.162 "data_offset": 2048, 00:10:52.162 "data_size": 63488 00:10:52.162 }, 00:10:52.162 { 00:10:52.162 "name": "BaseBdev3", 00:10:52.162 "uuid": "cd032318-af0a-5a73-9c74-7209d4ff2da4", 00:10:52.162 "is_configured": true, 00:10:52.162 "data_offset": 2048, 00:10:52.162 "data_size": 63488 00:10:52.162 }, 00:10:52.162 { 00:10:52.162 "name": "BaseBdev4", 00:10:52.162 "uuid": "ca4a8496-0266-560c-adcb-9ccb4bf2790c", 00:10:52.162 "is_configured": true, 00:10:52.162 "data_offset": 2048, 00:10:52.162 "data_size": 63488 00:10:52.162 } 00:10:52.162 ] 00:10:52.162 }' 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.162 01:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.422 01:12:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:52.422 01:12:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:52.682 [2024-10-15 01:12:05.211398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.620 [2024-10-15 01:12:06.126156] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:53.620 [2024-10-15 01:12:06.126240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:53.620 [2024-10-15 01:12:06.126491] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.620 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.621 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.621 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.621 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.621 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.621 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.621 "name": "raid_bdev1", 00:10:53.621 "uuid": "9c9d1a98-4240-4d13-bbc0-73fde54902c1", 00:10:53.621 "strip_size_kb": 0, 00:10:53.621 "state": "online", 00:10:53.621 "raid_level": "raid1", 00:10:53.621 "superblock": true, 00:10:53.621 "num_base_bdevs": 4, 00:10:53.621 "num_base_bdevs_discovered": 3, 00:10:53.621 "num_base_bdevs_operational": 3, 00:10:53.621 "base_bdevs_list": [ 00:10:53.621 { 00:10:53.621 "name": null, 00:10:53.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.621 "is_configured": false, 00:10:53.621 "data_offset": 0, 00:10:53.621 "data_size": 63488 00:10:53.621 }, 00:10:53.621 { 00:10:53.621 "name": "BaseBdev2", 00:10:53.621 "uuid": "b88c322d-3372-5eef-8986-77ed31e5e9be", 00:10:53.621 "is_configured": true, 00:10:53.621 "data_offset": 2048, 00:10:53.621 "data_size": 63488 00:10:53.621 }, 00:10:53.621 { 00:10:53.621 "name": "BaseBdev3", 00:10:53.621 "uuid": "cd032318-af0a-5a73-9c74-7209d4ff2da4", 00:10:53.621 "is_configured": true, 00:10:53.621 "data_offset": 2048, 00:10:53.621 "data_size": 63488 00:10:53.621 }, 00:10:53.621 { 00:10:53.621 "name": "BaseBdev4", 00:10:53.621 "uuid": "ca4a8496-0266-560c-adcb-9ccb4bf2790c", 00:10:53.621 "is_configured": true, 00:10:53.621 "data_offset": 2048, 00:10:53.621 "data_size": 63488 00:10:53.621 } 00:10:53.621 ] 
00:10:53.621 }' 00:10:53.621 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.621 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.881 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:53.881 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.881 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.881 [2024-10-15 01:12:06.533328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:53.881 [2024-10-15 01:12:06.533425] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.881 [2024-10-15 01:12:06.535961] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.881 [2024-10-15 01:12:06.536060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.881 [2024-10-15 01:12:06.536204] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.881 [2024-10-15 01:12:06.536267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:53.881 { 00:10:53.881 "results": [ 00:10:53.881 { 00:10:53.881 "job": "raid_bdev1", 00:10:53.881 "core_mask": "0x1", 00:10:53.881 "workload": "randrw", 00:10:53.881 "percentage": 50, 00:10:53.881 "status": "finished", 00:10:53.881 "queue_depth": 1, 00:10:53.881 "io_size": 131072, 00:10:53.881 "runtime": 1.322648, 00:10:53.881 "iops": 12265.546086335897, 00:10:53.881 "mibps": 1533.193260791987, 00:10:53.881 "io_failed": 0, 00:10:53.881 "io_timeout": 0, 00:10:53.881 "avg_latency_us": 78.9323002788375, 00:10:53.881 "min_latency_us": 22.46986899563319, 00:10:53.881 "max_latency_us": 1545.3903930131005 00:10:53.881 } 00:10:53.881 ], 00:10:53.881 "core_count": 1 
00:10:53.881 } 00:10:53.881 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.881 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85615 00:10:53.881 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 85615 ']' 00:10:53.881 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 85615 00:10:53.881 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:53.881 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:53.881 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85615 00:10:53.881 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:53.881 killing process with pid 85615 00:10:53.881 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:53.881 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85615' 00:10:53.881 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 85615 00:10:53.881 [2024-10-15 01:12:06.582993] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.881 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 85615 00:10:54.150 [2024-10-15 01:12:06.618406] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:54.150 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:54.150 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.c6OUcDggy1 00:10:54.150 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:54.150 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:10:54.150 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:54.150 ************************************ 00:10:54.150 END TEST raid_write_error_test 00:10:54.150 ************************************ 00:10:54.150 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:54.150 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:54.150 01:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:54.150 00:10:54.150 real 0m3.290s 00:10:54.150 user 0m4.158s 00:10:54.150 sys 0m0.512s 00:10:54.150 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.150 01:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.411 01:12:06 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:10:54.411 01:12:06 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:10:54.411 01:12:06 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:10:54.411 01:12:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:54.411 01:12:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.411 01:12:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:54.411 ************************************ 00:10:54.411 START TEST raid_rebuild_test 00:10:54.411 ************************************ 00:10:54.411 01:12:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:10:54.411 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:54.411 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:54.411 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:54.411 
01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:54.411 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:10:54.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85744 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85744 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 85744 ']' 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:54.412 01:12:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.412 [2024-10-15 01:12:06.999733] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:10:54.412 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:54.412 Zero copy mechanism will not be used. 
00:10:54.412 [2024-10-15 01:12:06.999926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85744 ] 00:10:54.671 [2024-10-15 01:12:07.144870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.671 [2024-10-15 01:12:07.172494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.671 [2024-10-15 01:12:07.215994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.671 [2024-10-15 01:12:07.216109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.242 BaseBdev1_malloc 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.242 [2024-10-15 01:12:07.850976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:55.242 
[2024-10-15 01:12:07.851090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.242 [2024-10-15 01:12:07.851137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:55.242 [2024-10-15 01:12:07.851171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.242 [2024-10-15 01:12:07.853383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.242 [2024-10-15 01:12:07.853453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:55.242 BaseBdev1 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.242 BaseBdev2_malloc 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.242 [2024-10-15 01:12:07.879694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:55.242 [2024-10-15 01:12:07.879743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.242 [2024-10-15 01:12:07.879779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:10:55.242 [2024-10-15 01:12:07.879788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.242 [2024-10-15 01:12:07.881861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.242 [2024-10-15 01:12:07.881903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:55.242 BaseBdev2 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.242 spare_malloc 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.242 spare_delay 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.242 [2024-10-15 01:12:07.920383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:55.242 [2024-10-15 01:12:07.920448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:10:55.242 [2024-10-15 01:12:07.920474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:55.242 [2024-10-15 01:12:07.920484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.242 [2024-10-15 01:12:07.922677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.242 [2024-10-15 01:12:07.922716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:55.242 spare 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.242 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.242 [2024-10-15 01:12:07.932373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.242 [2024-10-15 01:12:07.934294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.242 [2024-10-15 01:12:07.934462] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:55.242 [2024-10-15 01:12:07.934481] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:55.242 [2024-10-15 01:12:07.934808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:55.242 [2024-10-15 01:12:07.934956] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:55.243 [2024-10-15 01:12:07.934970] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:55.243 [2024-10-15 01:12:07.935094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.243 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.501 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.501 "name": "raid_bdev1", 00:10:55.501 "uuid": "2e906013-4128-4347-8ea0-3a2a8f22e194", 00:10:55.501 "strip_size_kb": 0, 00:10:55.501 "state": "online", 00:10:55.501 
"raid_level": "raid1", 00:10:55.501 "superblock": false, 00:10:55.501 "num_base_bdevs": 2, 00:10:55.501 "num_base_bdevs_discovered": 2, 00:10:55.501 "num_base_bdevs_operational": 2, 00:10:55.501 "base_bdevs_list": [ 00:10:55.501 { 00:10:55.501 "name": "BaseBdev1", 00:10:55.501 "uuid": "c69bddbe-745f-5fb3-80f8-00901a324e80", 00:10:55.501 "is_configured": true, 00:10:55.501 "data_offset": 0, 00:10:55.501 "data_size": 65536 00:10:55.501 }, 00:10:55.501 { 00:10:55.501 "name": "BaseBdev2", 00:10:55.501 "uuid": "96aaecc5-f1ec-5d61-a3c1-2c16bf581b00", 00:10:55.501 "is_configured": true, 00:10:55.501 "data_offset": 0, 00:10:55.501 "data_size": 65536 00:10:55.501 } 00:10:55.501 ] 00:10:55.501 }' 00:10:55.501 01:12:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.501 01:12:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.759 01:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:55.759 01:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:55.759 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.759 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.759 [2024-10-15 01:12:08.344009] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.760 01:12:08 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:55.760 01:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:56.019 [2024-10-15 01:12:08.607323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:56.019 /dev/nbd0 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.019 1+0 records in 00:10:56.019 1+0 records out 00:10:56.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420199 s, 9.7 MB/s 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:56.019 01:12:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:00.218 65536+0 records in 00:11:00.218 65536+0 records out 00:11:00.218 33554432 bytes (34 MB, 32 MiB) copied, 3.65354 s, 9.2 MB/s 00:11:00.218 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:00.218 01:12:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:00.218 01:12:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:00.218 01:12:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:00.218 01:12:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:00.218 01:12:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.218 01:12:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:00.219 [2024-10-15 01:12:12.544806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.219 [2024-10-15 01:12:12.572875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.219 01:12:12 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.219 "name": "raid_bdev1", 00:11:00.219 "uuid": "2e906013-4128-4347-8ea0-3a2a8f22e194", 00:11:00.219 "strip_size_kb": 0, 00:11:00.219 "state": "online", 00:11:00.219 "raid_level": "raid1", 00:11:00.219 "superblock": false, 00:11:00.219 "num_base_bdevs": 2, 00:11:00.219 "num_base_bdevs_discovered": 1, 00:11:00.219 "num_base_bdevs_operational": 1, 00:11:00.219 "base_bdevs_list": [ 00:11:00.219 { 00:11:00.219 "name": null, 00:11:00.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.219 "is_configured": false, 00:11:00.219 "data_offset": 0, 00:11:00.219 "data_size": 65536 00:11:00.219 }, 00:11:00.219 { 00:11:00.219 "name": "BaseBdev2", 00:11:00.219 "uuid": "96aaecc5-f1ec-5d61-a3c1-2c16bf581b00", 00:11:00.219 "is_configured": true, 00:11:00.219 "data_offset": 0, 00:11:00.219 "data_size": 65536 00:11:00.219 } 00:11:00.219 ] 00:11:00.219 }' 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.219 01:12:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.488 01:12:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:00.488 01:12:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.488 01:12:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.488 [2024-10-15 01:12:13.052136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:00.488 [2024-10-15 01:12:13.068031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 
00:11:00.488 01:12:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.488 01:12:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:00.488 [2024-10-15 01:12:13.070279] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:01.427 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:01.427 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:01.427 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:01.427 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:01.427 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:01.427 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.427 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.427 01:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.427 01:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.427 01:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.427 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:01.427 "name": "raid_bdev1", 00:11:01.427 "uuid": "2e906013-4128-4347-8ea0-3a2a8f22e194", 00:11:01.427 "strip_size_kb": 0, 00:11:01.427 "state": "online", 00:11:01.427 "raid_level": "raid1", 00:11:01.427 "superblock": false, 00:11:01.427 "num_base_bdevs": 2, 00:11:01.427 "num_base_bdevs_discovered": 2, 00:11:01.427 "num_base_bdevs_operational": 2, 00:11:01.427 "process": { 00:11:01.427 "type": "rebuild", 00:11:01.427 "target": "spare", 00:11:01.427 "progress": { 00:11:01.427 
"blocks": 20480, 00:11:01.427 "percent": 31 00:11:01.427 } 00:11:01.427 }, 00:11:01.427 "base_bdevs_list": [ 00:11:01.427 { 00:11:01.427 "name": "spare", 00:11:01.427 "uuid": "feb10cf6-bd21-5f7d-a52d-106ab8c703e3", 00:11:01.427 "is_configured": true, 00:11:01.427 "data_offset": 0, 00:11:01.427 "data_size": 65536 00:11:01.427 }, 00:11:01.427 { 00:11:01.427 "name": "BaseBdev2", 00:11:01.427 "uuid": "96aaecc5-f1ec-5d61-a3c1-2c16bf581b00", 00:11:01.427 "is_configured": true, 00:11:01.427 "data_offset": 0, 00:11:01.428 "data_size": 65536 00:11:01.428 } 00:11:01.428 ] 00:11:01.428 }' 00:11:01.428 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.687 [2024-10-15 01:12:14.226377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:01.687 [2024-10-15 01:12:14.275352] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:01.687 [2024-10-15 01:12:14.275477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.687 [2024-10-15 01:12:14.275503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:01.687 [2024-10-15 01:12:14.275512] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:01.687 01:12:14 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.687 "name": "raid_bdev1", 00:11:01.687 "uuid": "2e906013-4128-4347-8ea0-3a2a8f22e194", 00:11:01.687 "strip_size_kb": 0, 00:11:01.687 "state": "online", 00:11:01.687 "raid_level": "raid1", 00:11:01.687 
"superblock": false, 00:11:01.687 "num_base_bdevs": 2, 00:11:01.687 "num_base_bdevs_discovered": 1, 00:11:01.687 "num_base_bdevs_operational": 1, 00:11:01.687 "base_bdevs_list": [ 00:11:01.687 { 00:11:01.687 "name": null, 00:11:01.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.687 "is_configured": false, 00:11:01.687 "data_offset": 0, 00:11:01.687 "data_size": 65536 00:11:01.687 }, 00:11:01.687 { 00:11:01.687 "name": "BaseBdev2", 00:11:01.687 "uuid": "96aaecc5-f1ec-5d61-a3c1-2c16bf581b00", 00:11:01.687 "is_configured": true, 00:11:01.687 "data_offset": 0, 00:11:01.687 "data_size": 65536 00:11:01.687 } 00:11:01.687 ] 00:11:01.687 }' 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.687 01:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:11:02.257 "name": "raid_bdev1", 00:11:02.257 "uuid": "2e906013-4128-4347-8ea0-3a2a8f22e194", 00:11:02.257 "strip_size_kb": 0, 00:11:02.257 "state": "online", 00:11:02.257 "raid_level": "raid1", 00:11:02.257 "superblock": false, 00:11:02.257 "num_base_bdevs": 2, 00:11:02.257 "num_base_bdevs_discovered": 1, 00:11:02.257 "num_base_bdevs_operational": 1, 00:11:02.257 "base_bdevs_list": [ 00:11:02.257 { 00:11:02.257 "name": null, 00:11:02.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.257 "is_configured": false, 00:11:02.257 "data_offset": 0, 00:11:02.257 "data_size": 65536 00:11:02.257 }, 00:11:02.257 { 00:11:02.257 "name": "BaseBdev2", 00:11:02.257 "uuid": "96aaecc5-f1ec-5d61-a3c1-2c16bf581b00", 00:11:02.257 "is_configured": true, 00:11:02.257 "data_offset": 0, 00:11:02.257 "data_size": 65536 00:11:02.257 } 00:11:02.257 ] 00:11:02.257 }' 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 [2024-10-15 01:12:14.855682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:02.257 [2024-10-15 01:12:14.860628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d062f0 00:11:02.257 01:12:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.258 
01:12:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:02.258 [2024-10-15 01:12:14.862513] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:03.196 01:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:03.196 01:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:03.196 01:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:03.197 01:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:03.197 01:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:03.197 01:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.197 01:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.197 01:12:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.197 01:12:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.197 01:12:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.197 01:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:03.197 "name": "raid_bdev1", 00:11:03.197 "uuid": "2e906013-4128-4347-8ea0-3a2a8f22e194", 00:11:03.197 "strip_size_kb": 0, 00:11:03.197 "state": "online", 00:11:03.197 "raid_level": "raid1", 00:11:03.197 "superblock": false, 00:11:03.197 "num_base_bdevs": 2, 00:11:03.197 "num_base_bdevs_discovered": 2, 00:11:03.197 "num_base_bdevs_operational": 2, 00:11:03.197 "process": { 00:11:03.197 "type": "rebuild", 00:11:03.197 "target": "spare", 00:11:03.197 "progress": { 00:11:03.197 "blocks": 20480, 00:11:03.197 "percent": 31 00:11:03.197 } 00:11:03.197 }, 00:11:03.197 "base_bdevs_list": [ 
00:11:03.197 { 00:11:03.197 "name": "spare", 00:11:03.197 "uuid": "feb10cf6-bd21-5f7d-a52d-106ab8c703e3", 00:11:03.197 "is_configured": true, 00:11:03.197 "data_offset": 0, 00:11:03.197 "data_size": 65536 00:11:03.197 }, 00:11:03.197 { 00:11:03.197 "name": "BaseBdev2", 00:11:03.197 "uuid": "96aaecc5-f1ec-5d61-a3c1-2c16bf581b00", 00:11:03.197 "is_configured": true, 00:11:03.197 "data_offset": 0, 00:11:03.197 "data_size": 65536 00:11:03.197 } 00:11:03.197 ] 00:11:03.197 }' 00:11:03.197 01:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:03.457 01:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:03.457 01:12:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=288 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:03.457 
01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:03.457 "name": "raid_bdev1", 00:11:03.457 "uuid": "2e906013-4128-4347-8ea0-3a2a8f22e194", 00:11:03.457 "strip_size_kb": 0, 00:11:03.457 "state": "online", 00:11:03.457 "raid_level": "raid1", 00:11:03.457 "superblock": false, 00:11:03.457 "num_base_bdevs": 2, 00:11:03.457 "num_base_bdevs_discovered": 2, 00:11:03.457 "num_base_bdevs_operational": 2, 00:11:03.457 "process": { 00:11:03.457 "type": "rebuild", 00:11:03.457 "target": "spare", 00:11:03.457 "progress": { 00:11:03.457 "blocks": 22528, 00:11:03.457 "percent": 34 00:11:03.457 } 00:11:03.457 }, 00:11:03.457 "base_bdevs_list": [ 00:11:03.457 { 00:11:03.457 "name": "spare", 00:11:03.457 "uuid": "feb10cf6-bd21-5f7d-a52d-106ab8c703e3", 00:11:03.457 "is_configured": true, 00:11:03.457 "data_offset": 0, 00:11:03.457 "data_size": 65536 00:11:03.457 }, 00:11:03.457 { 00:11:03.457 "name": "BaseBdev2", 00:11:03.457 "uuid": "96aaecc5-f1ec-5d61-a3c1-2c16bf581b00", 00:11:03.457 "is_configured": true, 00:11:03.457 "data_offset": 0, 00:11:03.457 "data_size": 65536 00:11:03.457 } 00:11:03.457 ] 00:11:03.457 }' 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:03.457 01:12:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:04.839 "name": "raid_bdev1", 00:11:04.839 "uuid": "2e906013-4128-4347-8ea0-3a2a8f22e194", 00:11:04.839 "strip_size_kb": 0, 00:11:04.839 "state": "online", 00:11:04.839 "raid_level": "raid1", 00:11:04.839 "superblock": false, 00:11:04.839 "num_base_bdevs": 2, 00:11:04.839 "num_base_bdevs_discovered": 2, 00:11:04.839 "num_base_bdevs_operational": 2, 00:11:04.839 "process": { 
00:11:04.839 "type": "rebuild", 00:11:04.839 "target": "spare", 00:11:04.839 "progress": { 00:11:04.839 "blocks": 45056, 00:11:04.839 "percent": 68 00:11:04.839 } 00:11:04.839 }, 00:11:04.839 "base_bdevs_list": [ 00:11:04.839 { 00:11:04.839 "name": "spare", 00:11:04.839 "uuid": "feb10cf6-bd21-5f7d-a52d-106ab8c703e3", 00:11:04.839 "is_configured": true, 00:11:04.839 "data_offset": 0, 00:11:04.839 "data_size": 65536 00:11:04.839 }, 00:11:04.839 { 00:11:04.839 "name": "BaseBdev2", 00:11:04.839 "uuid": "96aaecc5-f1ec-5d61-a3c1-2c16bf581b00", 00:11:04.839 "is_configured": true, 00:11:04.839 "data_offset": 0, 00:11:04.839 "data_size": 65536 00:11:04.839 } 00:11:04.839 ] 00:11:04.839 }' 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:04.839 01:12:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:05.415 [2024-10-15 01:12:18.074718] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:05.415 [2024-10-15 01:12:18.074931] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:05.415 [2024-10-15 01:12:18.075007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.679 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:05.679 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:05.679 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:05.679 01:12:18 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:05.679 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:05.679 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:05.679 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.679 01:12:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.679 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.679 01:12:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.679 01:12:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.679 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:05.679 "name": "raid_bdev1", 00:11:05.679 "uuid": "2e906013-4128-4347-8ea0-3a2a8f22e194", 00:11:05.679 "strip_size_kb": 0, 00:11:05.679 "state": "online", 00:11:05.679 "raid_level": "raid1", 00:11:05.679 "superblock": false, 00:11:05.679 "num_base_bdevs": 2, 00:11:05.679 "num_base_bdevs_discovered": 2, 00:11:05.679 "num_base_bdevs_operational": 2, 00:11:05.679 "base_bdevs_list": [ 00:11:05.679 { 00:11:05.679 "name": "spare", 00:11:05.679 "uuid": "feb10cf6-bd21-5f7d-a52d-106ab8c703e3", 00:11:05.679 "is_configured": true, 00:11:05.679 "data_offset": 0, 00:11:05.679 "data_size": 65536 00:11:05.679 }, 00:11:05.679 { 00:11:05.679 "name": "BaseBdev2", 00:11:05.679 "uuid": "96aaecc5-f1ec-5d61-a3c1-2c16bf581b00", 00:11:05.679 "is_configured": true, 00:11:05.679 "data_offset": 0, 00:11:05.679 "data_size": 65536 00:11:05.679 } 00:11:05.679 ] 00:11:05.679 }' 00:11:05.679 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:05.679 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:05.679 01:12:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:05.939 "name": "raid_bdev1", 00:11:05.939 "uuid": "2e906013-4128-4347-8ea0-3a2a8f22e194", 00:11:05.939 "strip_size_kb": 0, 00:11:05.939 "state": "online", 00:11:05.939 "raid_level": "raid1", 00:11:05.939 "superblock": false, 00:11:05.939 "num_base_bdevs": 2, 00:11:05.939 "num_base_bdevs_discovered": 2, 00:11:05.939 "num_base_bdevs_operational": 2, 00:11:05.939 "base_bdevs_list": [ 00:11:05.939 { 00:11:05.939 "name": "spare", 00:11:05.939 "uuid": "feb10cf6-bd21-5f7d-a52d-106ab8c703e3", 00:11:05.939 "is_configured": true, 
00:11:05.939 "data_offset": 0, 00:11:05.939 "data_size": 65536 00:11:05.939 }, 00:11:05.939 { 00:11:05.939 "name": "BaseBdev2", 00:11:05.939 "uuid": "96aaecc5-f1ec-5d61-a3c1-2c16bf581b00", 00:11:05.939 "is_configured": true, 00:11:05.939 "data_offset": 0, 00:11:05.939 "data_size": 65536 00:11:05.939 } 00:11:05.939 ] 00:11:05.939 }' 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.939 01:12:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.939 "name": "raid_bdev1", 00:11:05.939 "uuid": "2e906013-4128-4347-8ea0-3a2a8f22e194", 00:11:05.939 "strip_size_kb": 0, 00:11:05.939 "state": "online", 00:11:05.939 "raid_level": "raid1", 00:11:05.939 "superblock": false, 00:11:05.939 "num_base_bdevs": 2, 00:11:05.939 "num_base_bdevs_discovered": 2, 00:11:05.939 "num_base_bdevs_operational": 2, 00:11:05.939 "base_bdevs_list": [ 00:11:05.939 { 00:11:05.939 "name": "spare", 00:11:05.939 "uuid": "feb10cf6-bd21-5f7d-a52d-106ab8c703e3", 00:11:05.939 "is_configured": true, 00:11:05.939 "data_offset": 0, 00:11:05.939 "data_size": 65536 00:11:05.939 }, 00:11:05.939 { 00:11:05.939 "name": "BaseBdev2", 00:11:05.939 "uuid": "96aaecc5-f1ec-5d61-a3c1-2c16bf581b00", 00:11:05.939 "is_configured": true, 00:11:05.939 "data_offset": 0, 00:11:05.939 "data_size": 65536 00:11:05.939 } 00:11:05.939 ] 00:11:05.939 }' 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.939 01:12:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.509 [2024-10-15 01:12:19.046358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.509 [2024-10-15 
01:12:19.046438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.509 [2024-10-15 01:12:19.046552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.509 [2024-10-15 01:12:19.046650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.509 [2024-10-15 01:12:19.046704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:06.509 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:06.769 /dev/nbd0 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:06.769 1+0 records in 00:11:06.769 1+0 records out 00:11:06.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368991 s, 11.1 MB/s 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:06.769 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:07.029 /dev/nbd1 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:07.029 1+0 records in 00:11:07.029 1+0 records out 00:11:07.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362832 s, 11.3 MB/s 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:07.029 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:07.289 01:12:19 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:07.289 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:07.289 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:07.289 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:07.289 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:07.289 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:07.289 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:07.289 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:07.289 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:07.289 01:12:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
85744 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 85744 ']' 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 85744 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85744 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:07.550 killing process with pid 85744 00:11:07.550 Received shutdown signal, test time was about 60.000000 seconds 00:11:07.550 00:11:07.550 Latency(us) 00:11:07.550 [2024-10-15T01:12:20.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:07.550 [2024-10-15T01:12:20.274Z] =================================================================================================================== 00:11:07.550 [2024-10-15T01:12:20.274Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85744' 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 85744 00:11:07.550 [2024-10-15 01:12:20.132894] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.550 01:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 85744 00:11:07.550 [2024-10-15 01:12:20.162384] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:07.818 00:11:07.818 real 0m13.462s 00:11:07.818 user 0m15.803s 00:11:07.818 sys 
0m2.744s 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.818 ************************************ 00:11:07.818 END TEST raid_rebuild_test 00:11:07.818 ************************************ 00:11:07.818 01:12:20 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:07.818 01:12:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:07.818 01:12:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.818 01:12:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.818 ************************************ 00:11:07.818 START TEST raid_rebuild_test_sb 00:11:07.818 ************************************ 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:07.818 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86145 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86145 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' 
-z 86145 ']' 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.819 01:12:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.819 [2024-10-15 01:12:20.526166] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:11:07.819 [2024-10-15 01:12:20.526392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:07.819 Zero copy mechanism will not be used. 
00:11:07.819 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86145 ] 00:11:08.091 [2024-10-15 01:12:20.672594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.091 [2024-10-15 01:12:20.699444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.091 [2024-10-15 01:12:20.742476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.091 [2024-10-15 01:12:20.742595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.661 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:08.661 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:08.661 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:08.661 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:08.661 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.661 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.661 BaseBdev1_malloc 00:11:08.661 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.661 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:08.661 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.661 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.661 [2024-10-15 01:12:21.373033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:08.661 [2024-10-15 01:12:21.373136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:08.661 [2024-10-15 01:12:21.373165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:08.661 [2024-10-15 01:12:21.373188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.661 [2024-10-15 01:12:21.375319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.661 [2024-10-15 01:12:21.375353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:08.661 BaseBdev1 00:11:08.661 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.661 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:08.661 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:08.661 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.661 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 BaseBdev2_malloc 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 [2024-10-15 01:12:21.401789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:08.921 [2024-10-15 01:12:21.401836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.921 [2024-10-15 01:12:21.401874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:08.921 [2024-10-15 01:12:21.401883] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.921 [2024-10-15 01:12:21.403984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.921 [2024-10-15 01:12:21.404025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:08.921 BaseBdev2 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 spare_malloc 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 spare_delay 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 [2024-10-15 01:12:21.442347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:08.921 [2024-10-15 01:12:21.442449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:11:08.921 [2024-10-15 01:12:21.442490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:08.921 [2024-10-15 01:12:21.442517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.921 [2024-10-15 01:12:21.444544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.921 [2024-10-15 01:12:21.444610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:08.921 spare 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 [2024-10-15 01:12:21.454375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.921 [2024-10-15 01:12:21.456134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.921 [2024-10-15 01:12:21.456357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:08.921 [2024-10-15 01:12:21.456405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:08.921 [2024-10-15 01:12:21.456706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:08.921 [2024-10-15 01:12:21.456871] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:08.921 [2024-10-15 01:12:21.456919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:08.921 [2024-10-15 01:12:21.457091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.921 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.921 "name": "raid_bdev1", 00:11:08.921 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:08.921 
"strip_size_kb": 0, 00:11:08.921 "state": "online", 00:11:08.921 "raid_level": "raid1", 00:11:08.921 "superblock": true, 00:11:08.922 "num_base_bdevs": 2, 00:11:08.922 "num_base_bdevs_discovered": 2, 00:11:08.922 "num_base_bdevs_operational": 2, 00:11:08.922 "base_bdevs_list": [ 00:11:08.922 { 00:11:08.922 "name": "BaseBdev1", 00:11:08.922 "uuid": "6b3831d4-7716-5cf8-9da5-3b68ff6a520b", 00:11:08.922 "is_configured": true, 00:11:08.922 "data_offset": 2048, 00:11:08.922 "data_size": 63488 00:11:08.922 }, 00:11:08.922 { 00:11:08.922 "name": "BaseBdev2", 00:11:08.922 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:08.922 "is_configured": true, 00:11:08.922 "data_offset": 2048, 00:11:08.922 "data_size": 63488 00:11:08.922 } 00:11:08.922 ] 00:11:08.922 }' 00:11:08.922 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.922 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.182 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:09.182 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:09.182 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.182 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.442 [2024-10-15 01:12:21.905901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:09.442 01:12:21 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:09.442 01:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:09.700 [2024-10-15 01:12:22.169246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:09.700 /dev/nbd0 00:11:09.700 
01:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:09.700 01:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:09.700 01:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:09.700 01:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:09.700 01:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:09.700 01:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:09.700 01:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:09.700 01:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:09.700 01:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:09.700 01:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:09.700 01:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:09.700 1+0 records in 00:11:09.700 1+0 records out 00:11:09.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534381 s, 7.7 MB/s 00:11:09.700 01:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:09.700 01:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:09.700 01:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:09.700 01:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:09.701 01:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:09.701 01:12:22 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:09.701 01:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:09.701 01:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:09.701 01:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:09.701 01:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:13.889 63488+0 records in 00:11:13.889 63488+0 records out 00:11:13.889 32505856 bytes (33 MB, 31 MiB) copied, 3.56309 s, 9.1 MB/s 00:11:13.889 01:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:13.889 01:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:13.889 01:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:13.889 01:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:13.889 01:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:13.889 01:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.889 01:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:13.889 [2024-10-15 01:12:26.013691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.889 [2024-10-15 01:12:26.030933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.889 "name": "raid_bdev1", 00:11:13.889 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:13.889 "strip_size_kb": 0, 00:11:13.889 "state": "online", 00:11:13.889 "raid_level": "raid1", 00:11:13.889 "superblock": true, 00:11:13.889 "num_base_bdevs": 2, 00:11:13.889 "num_base_bdevs_discovered": 1, 00:11:13.889 "num_base_bdevs_operational": 1, 00:11:13.889 "base_bdevs_list": [ 00:11:13.889 { 00:11:13.889 "name": null, 00:11:13.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.889 "is_configured": false, 00:11:13.889 "data_offset": 0, 00:11:13.889 "data_size": 63488 00:11:13.889 }, 00:11:13.889 { 00:11:13.889 "name": "BaseBdev2", 00:11:13.889 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:13.889 "is_configured": true, 00:11:13.889 "data_offset": 2048, 00:11:13.889 "data_size": 63488 00:11:13.889 } 00:11:13.889 ] 00:11:13.889 }' 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.889 [2024-10-15 01:12:26.510161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:13.889 [2024-10-15 01:12:26.533824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.889 01:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:13.889 [2024-10-15 01:12:26.536543] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:14.828 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:14.828 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:14.828 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:14.828 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:14.828 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:14.828 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.828 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.828 01:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.828 01:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:15.088 "name": "raid_bdev1", 00:11:15.088 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 
00:11:15.088 "strip_size_kb": 0, 00:11:15.088 "state": "online", 00:11:15.088 "raid_level": "raid1", 00:11:15.088 "superblock": true, 00:11:15.088 "num_base_bdevs": 2, 00:11:15.088 "num_base_bdevs_discovered": 2, 00:11:15.088 "num_base_bdevs_operational": 2, 00:11:15.088 "process": { 00:11:15.088 "type": "rebuild", 00:11:15.088 "target": "spare", 00:11:15.088 "progress": { 00:11:15.088 "blocks": 20480, 00:11:15.088 "percent": 32 00:11:15.088 } 00:11:15.088 }, 00:11:15.088 "base_bdevs_list": [ 00:11:15.088 { 00:11:15.088 "name": "spare", 00:11:15.088 "uuid": "eed6f003-a26b-59b8-b8ab-a08dd339878b", 00:11:15.088 "is_configured": true, 00:11:15.088 "data_offset": 2048, 00:11:15.088 "data_size": 63488 00:11:15.088 }, 00:11:15.088 { 00:11:15.088 "name": "BaseBdev2", 00:11:15.088 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:15.088 "is_configured": true, 00:11:15.088 "data_offset": 2048, 00:11:15.088 "data_size": 63488 00:11:15.088 } 00:11:15.088 ] 00:11:15.088 }' 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.088 [2024-10-15 01:12:27.675966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:15.088 [2024-10-15 01:12:27.742248] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:11:15.088 [2024-10-15 01:12:27.742325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.088 [2024-10-15 01:12:27.742346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:15.088 [2024-10-15 01:12:27.742354] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.088 01:12:27 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.088 "name": "raid_bdev1", 00:11:15.088 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:15.088 "strip_size_kb": 0, 00:11:15.088 "state": "online", 00:11:15.088 "raid_level": "raid1", 00:11:15.088 "superblock": true, 00:11:15.088 "num_base_bdevs": 2, 00:11:15.088 "num_base_bdevs_discovered": 1, 00:11:15.088 "num_base_bdevs_operational": 1, 00:11:15.088 "base_bdevs_list": [ 00:11:15.088 { 00:11:15.088 "name": null, 00:11:15.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.088 "is_configured": false, 00:11:15.088 "data_offset": 0, 00:11:15.088 "data_size": 63488 00:11:15.088 }, 00:11:15.088 { 00:11:15.088 "name": "BaseBdev2", 00:11:15.088 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:15.088 "is_configured": true, 00:11:15.088 "data_offset": 2048, 00:11:15.088 "data_size": 63488 00:11:15.088 } 00:11:15.088 ] 00:11:15.088 }' 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.088 01:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.662 01:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:15.662 01:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:15.662 01:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:15.662 01:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:15.662 01:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:15.662 01:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:15.662 01:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.662 01:12:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.662 01:12:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.662 01:12:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.662 01:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:15.662 "name": "raid_bdev1", 00:11:15.662 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:15.662 "strip_size_kb": 0, 00:11:15.662 "state": "online", 00:11:15.662 "raid_level": "raid1", 00:11:15.662 "superblock": true, 00:11:15.662 "num_base_bdevs": 2, 00:11:15.662 "num_base_bdevs_discovered": 1, 00:11:15.662 "num_base_bdevs_operational": 1, 00:11:15.662 "base_bdevs_list": [ 00:11:15.662 { 00:11:15.662 "name": null, 00:11:15.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.662 "is_configured": false, 00:11:15.662 "data_offset": 0, 00:11:15.662 "data_size": 63488 00:11:15.663 }, 00:11:15.663 { 00:11:15.663 "name": "BaseBdev2", 00:11:15.663 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:15.663 "is_configured": true, 00:11:15.663 "data_offset": 2048, 00:11:15.663 "data_size": 63488 00:11:15.663 } 00:11:15.663 ] 00:11:15.663 }' 00:11:15.663 01:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:15.663 01:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:15.663 01:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:15.663 01:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:15.663 01:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:11:15.663 01:12:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.663 01:12:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.663 [2024-10-15 01:12:28.354460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:15.663 [2024-10-15 01:12:28.359550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e350 00:11:15.663 01:12:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.663 01:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:15.663 [2024-10-15 01:12:28.361553] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:16.677 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:16.677 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:16.677 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:16.677 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:16.677 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:16.677 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.677 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.677 01:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.677 01:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.677 01:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:11:16.938 "name": "raid_bdev1", 00:11:16.938 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:16.938 "strip_size_kb": 0, 00:11:16.938 "state": "online", 00:11:16.938 "raid_level": "raid1", 00:11:16.938 "superblock": true, 00:11:16.938 "num_base_bdevs": 2, 00:11:16.938 "num_base_bdevs_discovered": 2, 00:11:16.938 "num_base_bdevs_operational": 2, 00:11:16.938 "process": { 00:11:16.938 "type": "rebuild", 00:11:16.938 "target": "spare", 00:11:16.938 "progress": { 00:11:16.938 "blocks": 20480, 00:11:16.938 "percent": 32 00:11:16.938 } 00:11:16.938 }, 00:11:16.938 "base_bdevs_list": [ 00:11:16.938 { 00:11:16.938 "name": "spare", 00:11:16.938 "uuid": "eed6f003-a26b-59b8-b8ab-a08dd339878b", 00:11:16.938 "is_configured": true, 00:11:16.938 "data_offset": 2048, 00:11:16.938 "data_size": 63488 00:11:16.938 }, 00:11:16.938 { 00:11:16.938 "name": "BaseBdev2", 00:11:16.938 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:16.938 "is_configured": true, 00:11:16.938 "data_offset": 2048, 00:11:16.938 "data_size": 63488 00:11:16.938 } 00:11:16.938 ] 00:11:16.938 }' 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:16.938 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:16.938 01:12:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=301 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:16.938 "name": "raid_bdev1", 00:11:16.938 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:16.938 "strip_size_kb": 0, 00:11:16.938 "state": "online", 00:11:16.938 "raid_level": "raid1", 00:11:16.938 "superblock": true, 00:11:16.938 "num_base_bdevs": 2, 00:11:16.938 "num_base_bdevs_discovered": 2, 00:11:16.938 "num_base_bdevs_operational": 2, 00:11:16.938 "process": { 00:11:16.938 
"type": "rebuild", 00:11:16.938 "target": "spare", 00:11:16.938 "progress": { 00:11:16.938 "blocks": 22528, 00:11:16.938 "percent": 35 00:11:16.938 } 00:11:16.938 }, 00:11:16.938 "base_bdevs_list": [ 00:11:16.938 { 00:11:16.938 "name": "spare", 00:11:16.938 "uuid": "eed6f003-a26b-59b8-b8ab-a08dd339878b", 00:11:16.938 "is_configured": true, 00:11:16.938 "data_offset": 2048, 00:11:16.938 "data_size": 63488 00:11:16.938 }, 00:11:16.938 { 00:11:16.938 "name": "BaseBdev2", 00:11:16.938 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:16.938 "is_configured": true, 00:11:16.938 "data_offset": 2048, 00:11:16.938 "data_size": 63488 00:11:16.938 } 00:11:16.938 ] 00:11:16.938 }' 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:16.938 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:17.198 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:17.198 01:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:18.138 01:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:18.138 01:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:18.138 01:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:18.138 01:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:18.138 01:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:18.138 01:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:18.138 01:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:18.138 01:12:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.138 01:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.138 01:12:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.138 01:12:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.138 01:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:18.138 "name": "raid_bdev1", 00:11:18.138 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:18.138 "strip_size_kb": 0, 00:11:18.138 "state": "online", 00:11:18.138 "raid_level": "raid1", 00:11:18.138 "superblock": true, 00:11:18.138 "num_base_bdevs": 2, 00:11:18.138 "num_base_bdevs_discovered": 2, 00:11:18.138 "num_base_bdevs_operational": 2, 00:11:18.138 "process": { 00:11:18.138 "type": "rebuild", 00:11:18.138 "target": "spare", 00:11:18.138 "progress": { 00:11:18.138 "blocks": 47104, 00:11:18.138 "percent": 74 00:11:18.138 } 00:11:18.138 }, 00:11:18.138 "base_bdevs_list": [ 00:11:18.138 { 00:11:18.138 "name": "spare", 00:11:18.138 "uuid": "eed6f003-a26b-59b8-b8ab-a08dd339878b", 00:11:18.138 "is_configured": true, 00:11:18.138 "data_offset": 2048, 00:11:18.138 "data_size": 63488 00:11:18.138 }, 00:11:18.138 { 00:11:18.138 "name": "BaseBdev2", 00:11:18.138 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:18.138 "is_configured": true, 00:11:18.138 "data_offset": 2048, 00:11:18.138 "data_size": 63488 00:11:18.138 } 00:11:18.138 ] 00:11:18.138 }' 00:11:18.138 01:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:18.138 01:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:18.138 01:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:18.138 
01:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:18.139 01:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:19.079 [2024-10-15 01:12:31.474287] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:19.079 [2024-10-15 01:12:31.474479] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:19.079 [2024-10-15 01:12:31.474659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.340 "name": "raid_bdev1", 00:11:19.340 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:19.340 
"strip_size_kb": 0, 00:11:19.340 "state": "online", 00:11:19.340 "raid_level": "raid1", 00:11:19.340 "superblock": true, 00:11:19.340 "num_base_bdevs": 2, 00:11:19.340 "num_base_bdevs_discovered": 2, 00:11:19.340 "num_base_bdevs_operational": 2, 00:11:19.340 "base_bdevs_list": [ 00:11:19.340 { 00:11:19.340 "name": "spare", 00:11:19.340 "uuid": "eed6f003-a26b-59b8-b8ab-a08dd339878b", 00:11:19.340 "is_configured": true, 00:11:19.340 "data_offset": 2048, 00:11:19.340 "data_size": 63488 00:11:19.340 }, 00:11:19.340 { 00:11:19.340 "name": "BaseBdev2", 00:11:19.340 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:19.340 "is_configured": true, 00:11:19.340 "data_offset": 2048, 00:11:19.340 "data_size": 63488 00:11:19.340 } 00:11:19.340 ] 00:11:19.340 }' 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.340 01:12:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.340 "name": "raid_bdev1", 00:11:19.340 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:19.340 "strip_size_kb": 0, 00:11:19.340 "state": "online", 00:11:19.340 "raid_level": "raid1", 00:11:19.340 "superblock": true, 00:11:19.340 "num_base_bdevs": 2, 00:11:19.340 "num_base_bdevs_discovered": 2, 00:11:19.340 "num_base_bdevs_operational": 2, 00:11:19.340 "base_bdevs_list": [ 00:11:19.340 { 00:11:19.340 "name": "spare", 00:11:19.340 "uuid": "eed6f003-a26b-59b8-b8ab-a08dd339878b", 00:11:19.340 "is_configured": true, 00:11:19.340 "data_offset": 2048, 00:11:19.340 "data_size": 63488 00:11:19.340 }, 00:11:19.340 { 00:11:19.340 "name": "BaseBdev2", 00:11:19.340 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:19.340 "is_configured": true, 00:11:19.340 "data_offset": 2048, 00:11:19.340 "data_size": 63488 00:11:19.340 } 00:11:19.340 ] 00:11:19.340 }' 00:11:19.340 01:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.340 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:19.340 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:19.600 01:12:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.600 "name": "raid_bdev1", 00:11:19.600 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:19.600 "strip_size_kb": 0, 00:11:19.600 "state": "online", 00:11:19.600 "raid_level": "raid1", 00:11:19.600 "superblock": true, 00:11:19.600 "num_base_bdevs": 2, 00:11:19.600 "num_base_bdevs_discovered": 2, 00:11:19.600 "num_base_bdevs_operational": 2, 00:11:19.600 "base_bdevs_list": [ 00:11:19.600 { 
00:11:19.600 "name": "spare", 00:11:19.600 "uuid": "eed6f003-a26b-59b8-b8ab-a08dd339878b", 00:11:19.600 "is_configured": true, 00:11:19.600 "data_offset": 2048, 00:11:19.600 "data_size": 63488 00:11:19.600 }, 00:11:19.600 { 00:11:19.600 "name": "BaseBdev2", 00:11:19.600 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:19.600 "is_configured": true, 00:11:19.600 "data_offset": 2048, 00:11:19.600 "data_size": 63488 00:11:19.600 } 00:11:19.600 ] 00:11:19.600 }' 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.600 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.861 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:19.861 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.861 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.861 [2024-10-15 01:12:32.529686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:19.861 [2024-10-15 01:12:32.529778] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.861 [2024-10-15 01:12:32.529898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.861 [2024-10-15 01:12:32.530004] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.861 [2024-10-15 01:12:32.530074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:19.861 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.861 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.861 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:19.861 01:12:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.861 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.861 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.121 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:20.121 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:20.121 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:20.121 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:20.121 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:20.121 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:20.121 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:20.121 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:20.121 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:20.122 /dev/nbd0 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:20.122 
01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.122 1+0 records in 00:11:20.122 1+0 records out 00:11:20.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347043 s, 11.8 MB/s 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:20.122 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.382 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:20.382 01:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:20.382 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:20.382 01:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:20.382 01:12:32 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:20.382 /dev/nbd1 00:11:20.382 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:20.382 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:20.382 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:20.382 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:20.382 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:20.382 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:20.382 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:20.382 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:20.382 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:20.382 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:20.382 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.382 1+0 records in 00:11:20.382 1+0 records out 00:11:20.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039765 s, 10.3 MB/s 00:11:20.382 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.642 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:20.642 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.642 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:11:20.642 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:20.642 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:20.642 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:20.642 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:20.642 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:20.642 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:20.642 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:20.642 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:20.642 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:20.642 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.642 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:20.903 
01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:20.903 [2024-10-15 01:12:33.606808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:20.903 [2024-10-15 01:12:33.606868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.903 [2024-10-15 01:12:33.606889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:20.903 [2024-10-15 01:12:33.606902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.903 [2024-10-15 01:12:33.609093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.903 [2024-10-15 01:12:33.609134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:20.903 [2024-10-15 01:12:33.609229] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:20.903 [2024-10-15 01:12:33.609293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:20.903 [2024-10-15 01:12:33.609413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.903 spare 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.903 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.163 [2024-10-15 01:12:33.709336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:11:21.163 [2024-10-15 01:12:33.709397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:21.163 [2024-10-15 01:12:33.709755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cae960 00:11:21.163 [2024-10-15 
01:12:33.709928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:11:21.163 [2024-10-15 01:12:33.709942] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:11:21.163 [2024-10-15 01:12:33.710095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.163 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.163 "name": "raid_bdev1", 00:11:21.163 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:21.163 "strip_size_kb": 0, 00:11:21.164 "state": "online", 00:11:21.164 "raid_level": "raid1", 00:11:21.164 "superblock": true, 00:11:21.164 "num_base_bdevs": 2, 00:11:21.164 "num_base_bdevs_discovered": 2, 00:11:21.164 "num_base_bdevs_operational": 2, 00:11:21.164 "base_bdevs_list": [ 00:11:21.164 { 00:11:21.164 "name": "spare", 00:11:21.164 "uuid": "eed6f003-a26b-59b8-b8ab-a08dd339878b", 00:11:21.164 "is_configured": true, 00:11:21.164 "data_offset": 2048, 00:11:21.164 "data_size": 63488 00:11:21.164 }, 00:11:21.164 { 00:11:21.164 "name": "BaseBdev2", 00:11:21.164 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:21.164 "is_configured": true, 00:11:21.164 "data_offset": 2048, 00:11:21.164 "data_size": 63488 00:11:21.164 } 00:11:21.164 ] 00:11:21.164 }' 00:11:21.164 01:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.164 01:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.734 
01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:21.734 "name": "raid_bdev1", 00:11:21.734 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:21.734 "strip_size_kb": 0, 00:11:21.734 "state": "online", 00:11:21.734 "raid_level": "raid1", 00:11:21.734 "superblock": true, 00:11:21.734 "num_base_bdevs": 2, 00:11:21.734 "num_base_bdevs_discovered": 2, 00:11:21.734 "num_base_bdevs_operational": 2, 00:11:21.734 "base_bdevs_list": [ 00:11:21.734 { 00:11:21.734 "name": "spare", 00:11:21.734 "uuid": "eed6f003-a26b-59b8-b8ab-a08dd339878b", 00:11:21.734 "is_configured": true, 00:11:21.734 "data_offset": 2048, 00:11:21.734 "data_size": 63488 00:11:21.734 }, 00:11:21.734 { 00:11:21.734 "name": "BaseBdev2", 00:11:21.734 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:21.734 "is_configured": true, 00:11:21.734 "data_offset": 2048, 00:11:21.734 "data_size": 63488 00:11:21.734 } 00:11:21.734 ] 00:11:21.734 }' 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.734 01:12:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.734 [2024-10-15 01:12:34.369531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.734 "name": "raid_bdev1", 00:11:21.734 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:21.734 "strip_size_kb": 0, 00:11:21.734 "state": "online", 00:11:21.734 "raid_level": "raid1", 00:11:21.734 "superblock": true, 00:11:21.734 "num_base_bdevs": 2, 00:11:21.734 "num_base_bdevs_discovered": 1, 00:11:21.734 "num_base_bdevs_operational": 1, 00:11:21.734 "base_bdevs_list": [ 00:11:21.734 { 00:11:21.734 "name": null, 00:11:21.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.734 "is_configured": false, 00:11:21.734 "data_offset": 0, 00:11:21.734 "data_size": 63488 00:11:21.734 }, 00:11:21.734 { 00:11:21.734 "name": "BaseBdev2", 00:11:21.734 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:21.734 "is_configured": true, 00:11:21.734 "data_offset": 2048, 00:11:21.734 "data_size": 63488 00:11:21.734 } 00:11:21.734 ] 00:11:21.734 }' 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.734 01:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.305 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:11:22.305 01:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.305 01:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.305 [2024-10-15 01:12:34.804857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:22.305 [2024-10-15 01:12:34.805044] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:22.305 [2024-10-15 01:12:34.805068] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:22.305 [2024-10-15 01:12:34.805113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:22.305 [2024-10-15 01:12:34.809871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caea30 00:11:22.305 01:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.305 01:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:22.305 [2024-10-15 01:12:34.811738] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:23.244 01:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:23.244 01:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:23.244 01:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:23.244 01:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:23.245 01:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:23.245 01:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.245 01:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:23.245 01:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.245 01:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.245 01:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.245 01:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:23.245 "name": "raid_bdev1", 00:11:23.245 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:23.245 "strip_size_kb": 0, 00:11:23.245 "state": "online", 00:11:23.245 "raid_level": "raid1", 00:11:23.245 "superblock": true, 00:11:23.245 "num_base_bdevs": 2, 00:11:23.245 "num_base_bdevs_discovered": 2, 00:11:23.245 "num_base_bdevs_operational": 2, 00:11:23.245 "process": { 00:11:23.245 "type": "rebuild", 00:11:23.245 "target": "spare", 00:11:23.245 "progress": { 00:11:23.245 "blocks": 20480, 00:11:23.245 "percent": 32 00:11:23.245 } 00:11:23.245 }, 00:11:23.245 "base_bdevs_list": [ 00:11:23.245 { 00:11:23.245 "name": "spare", 00:11:23.245 "uuid": "eed6f003-a26b-59b8-b8ab-a08dd339878b", 00:11:23.245 "is_configured": true, 00:11:23.245 "data_offset": 2048, 00:11:23.245 "data_size": 63488 00:11:23.245 }, 00:11:23.245 { 00:11:23.245 "name": "BaseBdev2", 00:11:23.245 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:23.245 "is_configured": true, 00:11:23.245 "data_offset": 2048, 00:11:23.245 "data_size": 63488 00:11:23.245 } 00:11:23.245 ] 00:11:23.245 }' 00:11:23.245 01:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:23.245 01:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:23.245 01:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:23.245 01:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:23.245 01:12:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:23.245 01:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.245 01:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.245 [2024-10-15 01:12:35.948029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:23.504 [2024-10-15 01:12:36.016398] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:23.504 [2024-10-15 01:12:36.016479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.504 [2024-10-15 01:12:36.016499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:23.504 [2024-10-15 01:12:36.016506] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.504 "name": "raid_bdev1", 00:11:23.504 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:23.504 "strip_size_kb": 0, 00:11:23.504 "state": "online", 00:11:23.504 "raid_level": "raid1", 00:11:23.504 "superblock": true, 00:11:23.504 "num_base_bdevs": 2, 00:11:23.504 "num_base_bdevs_discovered": 1, 00:11:23.504 "num_base_bdevs_operational": 1, 00:11:23.504 "base_bdevs_list": [ 00:11:23.504 { 00:11:23.504 "name": null, 00:11:23.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.504 "is_configured": false, 00:11:23.504 "data_offset": 0, 00:11:23.504 "data_size": 63488 00:11:23.504 }, 00:11:23.504 { 00:11:23.504 "name": "BaseBdev2", 00:11:23.504 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:23.504 "is_configured": true, 00:11:23.504 "data_offset": 2048, 00:11:23.504 "data_size": 63488 00:11:23.504 } 00:11:23.504 ] 00:11:23.504 }' 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.504 01:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.766 01:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 
00:11:23.766 01:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.766 01:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.766 [2024-10-15 01:12:36.456661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:23.766 [2024-10-15 01:12:36.456824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.766 [2024-10-15 01:12:36.456885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:23.766 [2024-10-15 01:12:36.456916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.766 [2024-10-15 01:12:36.457421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.766 [2024-10-15 01:12:36.457484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:23.766 [2024-10-15 01:12:36.457611] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:23.766 [2024-10-15 01:12:36.457652] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:23.766 [2024-10-15 01:12:36.457701] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:23.766 [2024-10-15 01:12:36.457768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:23.766 [2024-10-15 01:12:36.462711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:11:23.766 spare 00:11:23.766 01:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.766 01:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:23.766 [2024-10-15 01:12:36.464633] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.154 "name": "raid_bdev1", 00:11:25.154 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:25.154 "strip_size_kb": 0, 00:11:25.154 "state": "online", 00:11:25.154 
"raid_level": "raid1", 00:11:25.154 "superblock": true, 00:11:25.154 "num_base_bdevs": 2, 00:11:25.154 "num_base_bdevs_discovered": 2, 00:11:25.154 "num_base_bdevs_operational": 2, 00:11:25.154 "process": { 00:11:25.154 "type": "rebuild", 00:11:25.154 "target": "spare", 00:11:25.154 "progress": { 00:11:25.154 "blocks": 20480, 00:11:25.154 "percent": 32 00:11:25.154 } 00:11:25.154 }, 00:11:25.154 "base_bdevs_list": [ 00:11:25.154 { 00:11:25.154 "name": "spare", 00:11:25.154 "uuid": "eed6f003-a26b-59b8-b8ab-a08dd339878b", 00:11:25.154 "is_configured": true, 00:11:25.154 "data_offset": 2048, 00:11:25.154 "data_size": 63488 00:11:25.154 }, 00:11:25.154 { 00:11:25.154 "name": "BaseBdev2", 00:11:25.154 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:25.154 "is_configured": true, 00:11:25.154 "data_offset": 2048, 00:11:25.154 "data_size": 63488 00:11:25.154 } 00:11:25.154 ] 00:11:25.154 }' 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.154 [2024-10-15 01:12:37.629127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:25.154 [2024-10-15 01:12:37.669474] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:25.154 [2024-10-15 01:12:37.669548] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.154 [2024-10-15 01:12:37.669562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:25.154 [2024-10-15 01:12:37.669571] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.154 01:12:37 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.154 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.154 "name": "raid_bdev1", 00:11:25.154 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:25.154 "strip_size_kb": 0, 00:11:25.155 "state": "online", 00:11:25.155 "raid_level": "raid1", 00:11:25.155 "superblock": true, 00:11:25.155 "num_base_bdevs": 2, 00:11:25.155 "num_base_bdevs_discovered": 1, 00:11:25.155 "num_base_bdevs_operational": 1, 00:11:25.155 "base_bdevs_list": [ 00:11:25.155 { 00:11:25.155 "name": null, 00:11:25.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.155 "is_configured": false, 00:11:25.155 "data_offset": 0, 00:11:25.155 "data_size": 63488 00:11:25.155 }, 00:11:25.155 { 00:11:25.155 "name": "BaseBdev2", 00:11:25.155 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:25.155 "is_configured": true, 00:11:25.155 "data_offset": 2048, 00:11:25.155 "data_size": 63488 00:11:25.155 } 00:11:25.155 ] 00:11:25.155 }' 00:11:25.155 01:12:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.155 01:12:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.414 01:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:25.414 01:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.414 01:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:25.414 01:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:25.414 01:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.414 01:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.414 01:12:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.414 01:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.414 01:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.414 01:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.673 01:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.673 "name": "raid_bdev1", 00:11:25.673 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:25.673 "strip_size_kb": 0, 00:11:25.673 "state": "online", 00:11:25.673 "raid_level": "raid1", 00:11:25.673 "superblock": true, 00:11:25.673 "num_base_bdevs": 2, 00:11:25.673 "num_base_bdevs_discovered": 1, 00:11:25.673 "num_base_bdevs_operational": 1, 00:11:25.673 "base_bdevs_list": [ 00:11:25.673 { 00:11:25.673 "name": null, 00:11:25.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.673 "is_configured": false, 00:11:25.673 "data_offset": 0, 00:11:25.673 "data_size": 63488 00:11:25.673 }, 00:11:25.673 { 00:11:25.673 "name": "BaseBdev2", 00:11:25.673 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:25.673 "is_configured": true, 00:11:25.673 "data_offset": 2048, 00:11:25.673 "data_size": 63488 00:11:25.673 } 00:11:25.673 ] 00:11:25.673 }' 00:11:25.673 01:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.673 01:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:25.673 01:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.673 01:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:25.673 01:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:25.673 01:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:25.673 01:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.673 01:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.673 01:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:25.673 01:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.673 01:12:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.673 [2024-10-15 01:12:38.257301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:25.673 [2024-10-15 01:12:38.257364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.673 [2024-10-15 01:12:38.257384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:25.673 [2024-10-15 01:12:38.257395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.673 [2024-10-15 01:12:38.257803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.673 [2024-10-15 01:12:38.257824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:25.673 [2024-10-15 01:12:38.257900] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:25.673 [2024-10-15 01:12:38.257918] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:25.673 [2024-10-15 01:12:38.257926] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:25.673 [2024-10-15 01:12:38.257938] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:25.673 BaseBdev1 00:11:25.673 01:12:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.673 01:12:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.612 "name": "raid_bdev1", 00:11:26.612 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:26.612 
"strip_size_kb": 0, 00:11:26.612 "state": "online", 00:11:26.612 "raid_level": "raid1", 00:11:26.612 "superblock": true, 00:11:26.612 "num_base_bdevs": 2, 00:11:26.612 "num_base_bdevs_discovered": 1, 00:11:26.612 "num_base_bdevs_operational": 1, 00:11:26.612 "base_bdevs_list": [ 00:11:26.612 { 00:11:26.612 "name": null, 00:11:26.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.612 "is_configured": false, 00:11:26.612 "data_offset": 0, 00:11:26.612 "data_size": 63488 00:11:26.612 }, 00:11:26.612 { 00:11:26.612 "name": "BaseBdev2", 00:11:26.612 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:26.612 "is_configured": true, 00:11:26.612 "data_offset": 2048, 00:11:26.612 "data_size": 63488 00:11:26.612 } 00:11:26.612 ] 00:11:26.612 }' 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.612 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.182 01:12:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.182 "name": "raid_bdev1", 00:11:27.182 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:27.182 "strip_size_kb": 0, 00:11:27.182 "state": "online", 00:11:27.182 "raid_level": "raid1", 00:11:27.182 "superblock": true, 00:11:27.182 "num_base_bdevs": 2, 00:11:27.182 "num_base_bdevs_discovered": 1, 00:11:27.182 "num_base_bdevs_operational": 1, 00:11:27.182 "base_bdevs_list": [ 00:11:27.182 { 00:11:27.182 "name": null, 00:11:27.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.182 "is_configured": false, 00:11:27.182 "data_offset": 0, 00:11:27.182 "data_size": 63488 00:11:27.182 }, 00:11:27.182 { 00:11:27.182 "name": "BaseBdev2", 00:11:27.182 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:27.182 "is_configured": true, 00:11:27.182 "data_offset": 2048, 00:11:27.182 "data_size": 63488 00:11:27.182 } 00:11:27.182 ] 00:11:27.182 }' 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local 
arg=rpc_cmd 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.182 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.182 [2024-10-15 01:12:39.898818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.182 [2024-10-15 01:12:39.898999] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:27.182 [2024-10-15 01:12:39.899014] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:27.442 request: 00:11:27.442 { 00:11:27.442 "base_bdev": "BaseBdev1", 00:11:27.442 "raid_bdev": "raid_bdev1", 00:11:27.442 "method": "bdev_raid_add_base_bdev", 00:11:27.442 "req_id": 1 00:11:27.442 } 00:11:27.442 Got JSON-RPC error response 00:11:27.442 response: 00:11:27.442 { 00:11:27.442 "code": -22, 00:11:27.443 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:27.443 } 00:11:27.443 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:27.443 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:11:27.443 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:27.443 01:12:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:27.443 01:12:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:27.443 01:12:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.383 "name": "raid_bdev1", 00:11:28.383 "uuid": 
"ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:28.383 "strip_size_kb": 0, 00:11:28.383 "state": "online", 00:11:28.383 "raid_level": "raid1", 00:11:28.383 "superblock": true, 00:11:28.383 "num_base_bdevs": 2, 00:11:28.383 "num_base_bdevs_discovered": 1, 00:11:28.383 "num_base_bdevs_operational": 1, 00:11:28.383 "base_bdevs_list": [ 00:11:28.383 { 00:11:28.383 "name": null, 00:11:28.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.383 "is_configured": false, 00:11:28.383 "data_offset": 0, 00:11:28.383 "data_size": 63488 00:11:28.383 }, 00:11:28.383 { 00:11:28.383 "name": "BaseBdev2", 00:11:28.383 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:28.383 "is_configured": true, 00:11:28.383 "data_offset": 2048, 00:11:28.383 "data_size": 63488 00:11:28.383 } 00:11:28.383 ] 00:11:28.383 }' 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.383 01:12:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.742 01:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:28.742 01:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.742 01:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:28.742 01:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:28.742 01:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.742 01:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.742 01:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.742 01:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.742 01:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:11:28.742 01:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.742 01:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.742 "name": "raid_bdev1", 00:11:28.742 "uuid": "ff2614a4-09dc-4c20-95d3-9108c781c945", 00:11:28.742 "strip_size_kb": 0, 00:11:28.742 "state": "online", 00:11:28.742 "raid_level": "raid1", 00:11:28.742 "superblock": true, 00:11:28.742 "num_base_bdevs": 2, 00:11:28.742 "num_base_bdevs_discovered": 1, 00:11:28.742 "num_base_bdevs_operational": 1, 00:11:28.742 "base_bdevs_list": [ 00:11:28.742 { 00:11:28.742 "name": null, 00:11:28.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.742 "is_configured": false, 00:11:28.742 "data_offset": 0, 00:11:28.742 "data_size": 63488 00:11:28.742 }, 00:11:28.742 { 00:11:28.742 "name": "BaseBdev2", 00:11:28.742 "uuid": "553210ff-3296-58df-a221-93771f0303b9", 00:11:28.742 "is_configured": true, 00:11:28.742 "data_offset": 2048, 00:11:28.742 "data_size": 63488 00:11:28.742 } 00:11:28.742 ] 00:11:28.742 }' 00:11:28.742 01:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:29.002 01:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:29.002 01:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:29.002 01:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:29.002 01:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86145 00:11:29.002 01:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86145 ']' 00:11:29.002 01:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86145 00:11:29.002 01:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:29.002 01:12:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.002 01:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86145 00:11:29.002 01:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:29.002 01:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:29.002 killing process with pid 86145 00:11:29.003 01:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86145' 00:11:29.003 01:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86145 00:11:29.003 Received shutdown signal, test time was about 60.000000 seconds 00:11:29.003 00:11:29.003 Latency(us) 00:11:29.003 [2024-10-15T01:12:41.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:29.003 [2024-10-15T01:12:41.727Z] =================================================================================================================== 00:11:29.003 [2024-10-15T01:12:41.727Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:29.003 [2024-10-15 01:12:41.562829] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.003 [2024-10-15 01:12:41.562956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.003 01:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86145 00:11:29.003 [2024-10-15 01:12:41.563018] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.003 [2024-10-15 01:12:41.563027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:11:29.003 [2024-10-15 01:12:41.594567] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:11:29.263 00:11:29.263 real 0m21.366s 00:11:29.263 user 0m26.855s 00:11:29.263 sys 0m3.375s 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.263 ************************************ 00:11:29.263 END TEST raid_rebuild_test_sb 00:11:29.263 ************************************ 00:11:29.263 01:12:41 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:29.263 01:12:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:29.263 01:12:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:29.263 01:12:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.263 ************************************ 00:11:29.263 START TEST raid_rebuild_test_io 00:11:29.263 ************************************ 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=86855 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 86855 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 
86855 ']' 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:29.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:29.263 01:12:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:29.263 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:29.263 Zero copy mechanism will not be used. 00:11:29.263 [2024-10-15 01:12:41.956165] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:11:29.263 [2024-10-15 01:12:41.956327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86855 ] 00:11:29.524 [2024-10-15 01:12:42.089217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.524 [2024-10-15 01:12:42.117664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.524 [2024-10-15 01:12:42.161097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.524 [2024-10-15 01:12:42.161141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.093 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:30.093 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:11:30.093 01:12:42 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:30.093 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.354 BaseBdev1_malloc 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.354 [2024-10-15 01:12:42.840319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:30.354 [2024-10-15 01:12:42.840380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.354 [2024-10-15 01:12:42.840404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:30.354 [2024-10-15 01:12:42.840416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.354 [2024-10-15 01:12:42.842430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.354 [2024-10-15 01:12:42.842466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:30.354 BaseBdev1 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.354 BaseBdev2_malloc 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.354 [2024-10-15 01:12:42.869106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:30.354 [2024-10-15 01:12:42.869153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.354 [2024-10-15 01:12:42.869173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:30.354 [2024-10-15 01:12:42.869196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.354 [2024-10-15 01:12:42.871324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.354 [2024-10-15 01:12:42.871365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:30.354 BaseBdev2 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.354 spare_malloc 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.354 spare_delay 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.354 [2024-10-15 01:12:42.909828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:30.354 [2024-10-15 01:12:42.909899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.354 [2024-10-15 01:12:42.909920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:30.354 [2024-10-15 01:12:42.909929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.354 [2024-10-15 01:12:42.912015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.354 [2024-10-15 01:12:42.912050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:30.354 spare 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.354 
01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.354 [2024-10-15 01:12:42.921865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.354 [2024-10-15 01:12:42.923711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.354 [2024-10-15 01:12:42.923826] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:30.354 [2024-10-15 01:12:42.923838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:30.354 [2024-10-15 01:12:42.924124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:30.354 [2024-10-15 01:12:42.924260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:30.354 [2024-10-15 01:12:42.924278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:30.354 [2024-10-15 01:12:42.924408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.354 "name": "raid_bdev1", 00:11:30.354 "uuid": "35180ede-99af-46e3-b849-eec9befa8e13", 00:11:30.354 "strip_size_kb": 0, 00:11:30.354 "state": "online", 00:11:30.354 "raid_level": "raid1", 00:11:30.354 "superblock": false, 00:11:30.354 "num_base_bdevs": 2, 00:11:30.354 "num_base_bdevs_discovered": 2, 00:11:30.354 "num_base_bdevs_operational": 2, 00:11:30.354 "base_bdevs_list": [ 00:11:30.354 { 00:11:30.354 "name": "BaseBdev1", 00:11:30.354 "uuid": "243c1784-1fdb-574e-986f-7340cc364c37", 00:11:30.354 "is_configured": true, 00:11:30.354 "data_offset": 0, 00:11:30.354 "data_size": 65536 00:11:30.354 }, 00:11:30.354 { 00:11:30.354 "name": "BaseBdev2", 00:11:30.354 "uuid": "6fd45a65-0532-5f8d-8c70-bb34283516bf", 00:11:30.354 "is_configured": true, 00:11:30.354 "data_offset": 0, 00:11:30.354 "data_size": 65536 00:11:30.354 } 00:11:30.354 ] 00:11:30.354 }' 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.354 01:12:42 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:11:30.615 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:30.615 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:30.615 01:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.615 01:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.615 [2024-10-15 01:12:43.329464] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:11:30.875 [2024-10-15 01:12:43.405060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:30.875 "name": "raid_bdev1", 00:11:30.875 "uuid": "35180ede-99af-46e3-b849-eec9befa8e13", 00:11:30.875 "strip_size_kb": 0, 00:11:30.875 "state": "online", 00:11:30.875 "raid_level": "raid1", 00:11:30.875 "superblock": false, 00:11:30.875 "num_base_bdevs": 2, 00:11:30.875 "num_base_bdevs_discovered": 1, 00:11:30.875 "num_base_bdevs_operational": 1, 00:11:30.875 "base_bdevs_list": [ 00:11:30.875 { 00:11:30.875 "name": null, 00:11:30.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.875 "is_configured": false, 00:11:30.875 "data_offset": 0, 00:11:30.875 "data_size": 65536 00:11:30.875 }, 00:11:30.875 { 00:11:30.875 "name": "BaseBdev2", 00:11:30.875 "uuid": "6fd45a65-0532-5f8d-8c70-bb34283516bf", 00:11:30.875 "is_configured": true, 00:11:30.875 "data_offset": 0, 00:11:30.875 "data_size": 65536 00:11:30.875 } 00:11:30.875 ] 00:11:30.875 }' 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.875 01:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.875 [2024-10-15 01:12:43.499016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:30.875 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:30.875 Zero copy mechanism will not be used. 00:11:30.875 Running I/O for 60 seconds... 
00:11:31.135 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:31.135 01:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.135 01:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.135 [2024-10-15 01:12:43.849941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:31.395 01:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.395 01:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:31.395 [2024-10-15 01:12:43.923269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:31.395 [2024-10-15 01:12:43.925298] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:31.395 [2024-10-15 01:12:44.044345] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:31.395 [2024-10-15 01:12:44.044943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:31.654 [2024-10-15 01:12:44.252688] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:31.654 [2024-10-15 01:12:44.252998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:31.913 [2024-10-15 01:12:44.487017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:32.173 183.00 IOPS, 549.00 MiB/s [2024-10-15T01:12:44.897Z] [2024-10-15 01:12:44.700685] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:32.438 01:12:44 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:32.438 01:12:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:32.438 01:12:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:32.438 01:12:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:32.438 01:12:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:32.438 01:12:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.438 01:12:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.438 01:12:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.438 01:12:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.438 01:12:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.438 [2024-10-15 01:12:44.943786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:32.438 [2024-10-15 01:12:44.944334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:32.438 01:12:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:32.438 "name": "raid_bdev1", 00:11:32.438 "uuid": "35180ede-99af-46e3-b849-eec9befa8e13", 00:11:32.438 "strip_size_kb": 0, 00:11:32.438 "state": "online", 00:11:32.438 "raid_level": "raid1", 00:11:32.438 "superblock": false, 00:11:32.438 "num_base_bdevs": 2, 00:11:32.438 "num_base_bdevs_discovered": 2, 00:11:32.438 "num_base_bdevs_operational": 2, 00:11:32.438 "process": { 00:11:32.438 "type": "rebuild", 00:11:32.438 "target": "spare", 00:11:32.438 "progress": { 00:11:32.438 "blocks": 12288, 
00:11:32.438 "percent": 18 00:11:32.438 } 00:11:32.438 }, 00:11:32.438 "base_bdevs_list": [ 00:11:32.438 { 00:11:32.438 "name": "spare", 00:11:32.438 "uuid": "feae3e96-18fd-5975-8687-edcd97d52a11", 00:11:32.438 "is_configured": true, 00:11:32.438 "data_offset": 0, 00:11:32.438 "data_size": 65536 00:11:32.438 }, 00:11:32.438 { 00:11:32.438 "name": "BaseBdev2", 00:11:32.438 "uuid": "6fd45a65-0532-5f8d-8c70-bb34283516bf", 00:11:32.438 "is_configured": true, 00:11:32.438 "data_offset": 0, 00:11:32.438 "data_size": 65536 00:11:32.438 } 00:11:32.438 ] 00:11:32.438 }' 00:11:32.438 01:12:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:32.438 01:12:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:32.438 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:32.438 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:32.438 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:32.438 01:12:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.438 01:12:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.438 [2024-10-15 01:12:45.033514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:32.438 [2024-10-15 01:12:45.052586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:32.703 [2024-10-15 01:12:45.158301] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:32.703 [2024-10-15 01:12:45.166773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.703 [2024-10-15 01:12:45.166826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:11:32.703 [2024-10-15 01:12:45.166854] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:32.703 [2024-10-15 01:12:45.179058] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.703 "name": "raid_bdev1", 00:11:32.703 "uuid": "35180ede-99af-46e3-b849-eec9befa8e13", 00:11:32.703 "strip_size_kb": 0, 00:11:32.703 "state": "online", 00:11:32.703 "raid_level": "raid1", 00:11:32.703 "superblock": false, 00:11:32.703 "num_base_bdevs": 2, 00:11:32.703 "num_base_bdevs_discovered": 1, 00:11:32.703 "num_base_bdevs_operational": 1, 00:11:32.703 "base_bdevs_list": [ 00:11:32.703 { 00:11:32.703 "name": null, 00:11:32.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.703 "is_configured": false, 00:11:32.703 "data_offset": 0, 00:11:32.703 "data_size": 65536 00:11:32.703 }, 00:11:32.703 { 00:11:32.703 "name": "BaseBdev2", 00:11:32.703 "uuid": "6fd45a65-0532-5f8d-8c70-bb34283516bf", 00:11:32.703 "is_configured": true, 00:11:32.703 "data_offset": 0, 00:11:32.703 "data_size": 65536 00:11:32.703 } 00:11:32.703 ] 00:11:32.703 }' 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.703 01:12:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.963 171.50 IOPS, 514.50 MiB/s [2024-10-15T01:12:45.687Z] 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:32.963 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:32.963 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:32.963 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:32.963 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:32.963 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.963 01:12:45 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.963 01:12:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.963 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.963 01:12:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.963 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:32.963 "name": "raid_bdev1", 00:11:32.963 "uuid": "35180ede-99af-46e3-b849-eec9befa8e13", 00:11:32.963 "strip_size_kb": 0, 00:11:32.963 "state": "online", 00:11:32.963 "raid_level": "raid1", 00:11:32.963 "superblock": false, 00:11:32.963 "num_base_bdevs": 2, 00:11:32.963 "num_base_bdevs_discovered": 1, 00:11:32.963 "num_base_bdevs_operational": 1, 00:11:32.963 "base_bdevs_list": [ 00:11:32.963 { 00:11:32.963 "name": null, 00:11:32.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.963 "is_configured": false, 00:11:32.963 "data_offset": 0, 00:11:32.963 "data_size": 65536 00:11:32.963 }, 00:11:32.963 { 00:11:32.963 "name": "BaseBdev2", 00:11:32.963 "uuid": "6fd45a65-0532-5f8d-8c70-bb34283516bf", 00:11:32.963 "is_configured": true, 00:11:32.963 "data_offset": 0, 00:11:32.964 "data_size": 65536 00:11:32.964 } 00:11:32.964 ] 00:11:32.964 }' 00:11:32.964 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:33.223 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:33.223 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:33.223 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:33.223 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:33.223 01:12:45 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.223 01:12:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.223 [2024-10-15 01:12:45.767538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:33.223 01:12:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.223 01:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:33.223 [2024-10-15 01:12:45.800373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:33.223 [2024-10-15 01:12:45.802298] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:33.223 [2024-10-15 01:12:45.921512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:33.223 [2024-10-15 01:12:45.922036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:33.483 [2024-10-15 01:12:46.046301] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:33.483 [2024-10-15 01:12:46.046639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:33.744 [2024-10-15 01:12:46.371007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:34.003 168.33 IOPS, 505.00 MiB/s [2024-10-15T01:12:46.727Z] [2024-10-15 01:12:46.585340] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:34.263 
01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:34.263 "name": "raid_bdev1", 00:11:34.263 "uuid": "35180ede-99af-46e3-b849-eec9befa8e13", 00:11:34.263 "strip_size_kb": 0, 00:11:34.263 "state": "online", 00:11:34.263 "raid_level": "raid1", 00:11:34.263 "superblock": false, 00:11:34.263 "num_base_bdevs": 2, 00:11:34.263 "num_base_bdevs_discovered": 2, 00:11:34.263 "num_base_bdevs_operational": 2, 00:11:34.263 "process": { 00:11:34.263 "type": "rebuild", 00:11:34.263 "target": "spare", 00:11:34.263 "progress": { 00:11:34.263 "blocks": 12288, 00:11:34.263 "percent": 18 00:11:34.263 } 00:11:34.263 }, 00:11:34.263 "base_bdevs_list": [ 00:11:34.263 { 00:11:34.263 "name": "spare", 00:11:34.263 "uuid": "feae3e96-18fd-5975-8687-edcd97d52a11", 00:11:34.263 "is_configured": true, 00:11:34.263 "data_offset": 0, 00:11:34.263 "data_size": 65536 00:11:34.263 }, 00:11:34.263 { 00:11:34.263 "name": "BaseBdev2", 00:11:34.263 "uuid": "6fd45a65-0532-5f8d-8c70-bb34283516bf", 00:11:34.263 "is_configured": true, 00:11:34.263 "data_offset": 0, 00:11:34.263 "data_size": 65536 00:11:34.263 } 
00:11:34.263 ] 00:11:34.263 }' 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:34.263 [2024-10-15 01:12:46.926330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=318 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.263 01:12:46 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.263 01:12:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.523 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:34.523 "name": "raid_bdev1", 00:11:34.523 "uuid": "35180ede-99af-46e3-b849-eec9befa8e13", 00:11:34.523 "strip_size_kb": 0, 00:11:34.523 "state": "online", 00:11:34.523 "raid_level": "raid1", 00:11:34.523 "superblock": false, 00:11:34.523 "num_base_bdevs": 2, 00:11:34.523 "num_base_bdevs_discovered": 2, 00:11:34.523 "num_base_bdevs_operational": 2, 00:11:34.523 "process": { 00:11:34.523 "type": "rebuild", 00:11:34.523 "target": "spare", 00:11:34.523 "progress": { 00:11:34.523 "blocks": 14336, 00:11:34.523 "percent": 21 00:11:34.523 } 00:11:34.523 }, 00:11:34.523 "base_bdevs_list": [ 00:11:34.523 { 00:11:34.523 "name": "spare", 00:11:34.523 "uuid": "feae3e96-18fd-5975-8687-edcd97d52a11", 00:11:34.523 "is_configured": true, 00:11:34.523 "data_offset": 0, 00:11:34.523 "data_size": 65536 00:11:34.523 }, 00:11:34.523 { 00:11:34.523 "name": "BaseBdev2", 00:11:34.523 "uuid": "6fd45a65-0532-5f8d-8c70-bb34283516bf", 00:11:34.523 "is_configured": true, 00:11:34.523 "data_offset": 0, 00:11:34.523 "data_size": 65536 00:11:34.523 } 00:11:34.523 ] 00:11:34.523 }' 00:11:34.523 01:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:34.523 01:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:34.523 01:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:34.523 [2024-10-15 01:12:47.054391] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:34.523 01:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:34.523 01:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:34.783 [2024-10-15 01:12:47.269926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:34.783 [2024-10-15 01:12:47.270559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:34.783 [2024-10-15 01:12:47.486233] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:35.302 154.75 IOPS, 464.25 MiB/s [2024-10-15T01:12:48.026Z] [2024-10-15 01:12:47.845799] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:35.561 [2024-10-15 01:12:48.075466] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.561 01:12:48 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:35.561 "name": "raid_bdev1", 00:11:35.561 "uuid": "35180ede-99af-46e3-b849-eec9befa8e13", 00:11:35.561 "strip_size_kb": 0, 00:11:35.561 "state": "online", 00:11:35.561 "raid_level": "raid1", 00:11:35.561 "superblock": false, 00:11:35.561 "num_base_bdevs": 2, 00:11:35.561 "num_base_bdevs_discovered": 2, 00:11:35.561 "num_base_bdevs_operational": 2, 00:11:35.561 "process": { 00:11:35.561 "type": "rebuild", 00:11:35.561 "target": "spare", 00:11:35.561 "progress": { 00:11:35.561 "blocks": 32768, 00:11:35.561 "percent": 50 00:11:35.561 } 00:11:35.561 }, 00:11:35.561 "base_bdevs_list": [ 00:11:35.561 { 00:11:35.561 "name": "spare", 00:11:35.561 "uuid": "feae3e96-18fd-5975-8687-edcd97d52a11", 00:11:35.561 "is_configured": true, 00:11:35.561 "data_offset": 0, 00:11:35.561 "data_size": 65536 00:11:35.561 }, 00:11:35.561 { 00:11:35.561 "name": "BaseBdev2", 00:11:35.561 "uuid": "6fd45a65-0532-5f8d-8c70-bb34283516bf", 00:11:35.561 "is_configured": true, 00:11:35.561 "data_offset": 0, 00:11:35.561 "data_size": 65536 00:11:35.561 } 00:11:35.561 ] 00:11:35.561 }' 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:35.561 01:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:35.820 133.40 IOPS, 400.20 MiB/s [2024-10-15T01:12:48.544Z] [2024-10-15 01:12:48.508244] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:11:36.389 [2024-10-15 01:12:49.036591] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:36.389 [2024-10-15 01:12:49.037018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:36.648 01:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:36.648 01:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:36.649 01:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:36.649 01:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:36.649 01:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:36.649 01:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:36.649 01:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.649 01:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.649 01:12:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.649 01:12:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.649 [2024-10-15 01:12:49.268104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:36.649 [2024-10-15 
01:12:49.268603] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:36.649 01:12:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.649 01:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:36.649 "name": "raid_bdev1", 00:11:36.649 "uuid": "35180ede-99af-46e3-b849-eec9befa8e13", 00:11:36.649 "strip_size_kb": 0, 00:11:36.649 "state": "online", 00:11:36.649 "raid_level": "raid1", 00:11:36.649 "superblock": false, 00:11:36.649 "num_base_bdevs": 2, 00:11:36.649 "num_base_bdevs_discovered": 2, 00:11:36.649 "num_base_bdevs_operational": 2, 00:11:36.649 "process": { 00:11:36.649 "type": "rebuild", 00:11:36.649 "target": "spare", 00:11:36.649 "progress": { 00:11:36.649 "blocks": 49152, 00:11:36.649 "percent": 75 00:11:36.649 } 00:11:36.649 }, 00:11:36.649 "base_bdevs_list": [ 00:11:36.649 { 00:11:36.649 "name": "spare", 00:11:36.649 "uuid": "feae3e96-18fd-5975-8687-edcd97d52a11", 00:11:36.649 "is_configured": true, 00:11:36.649 "data_offset": 0, 00:11:36.649 "data_size": 65536 00:11:36.649 }, 00:11:36.649 { 00:11:36.649 "name": "BaseBdev2", 00:11:36.649 "uuid": "6fd45a65-0532-5f8d-8c70-bb34283516bf", 00:11:36.649 "is_configured": true, 00:11:36.649 "data_offset": 0, 00:11:36.649 "data_size": 65536 00:11:36.649 } 00:11:36.649 ] 00:11:36.649 }' 00:11:36.649 01:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:36.649 01:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:36.649 01:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:36.908 01:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:36.908 01:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:37.477 118.00 IOPS, 354.00 
MiB/s [2024-10-15T01:12:50.201Z] [2024-10-15 01:12:50.050550] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:37.477 [2024-10-15 01:12:50.150433] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:37.477 [2024-10-15 01:12:50.152398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.737 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:37.737 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:37.737 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.737 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:37.737 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:37.737 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.737 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.737 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.737 01:12:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.737 01:12:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.737 01:12:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.737 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.737 "name": "raid_bdev1", 00:11:37.737 "uuid": "35180ede-99af-46e3-b849-eec9befa8e13", 00:11:37.737 "strip_size_kb": 0, 00:11:37.737 "state": "online", 00:11:37.737 "raid_level": "raid1", 00:11:37.737 "superblock": false, 00:11:37.737 "num_base_bdevs": 2, 
00:11:37.737 "num_base_bdevs_discovered": 2, 00:11:37.737 "num_base_bdevs_operational": 2, 00:11:37.737 "base_bdevs_list": [ 00:11:37.737 { 00:11:37.737 "name": "spare", 00:11:37.737 "uuid": "feae3e96-18fd-5975-8687-edcd97d52a11", 00:11:37.737 "is_configured": true, 00:11:37.737 "data_offset": 0, 00:11:37.737 "data_size": 65536 00:11:37.737 }, 00:11:37.737 { 00:11:37.737 "name": "BaseBdev2", 00:11:37.737 "uuid": "6fd45a65-0532-5f8d-8c70-bb34283516bf", 00:11:37.737 "is_configured": true, 00:11:37.737 "data_offset": 0, 00:11:37.737 "data_size": 65536 00:11:37.737 } 00:11:37.737 ] 00:11:37.737 }' 00:11:37.737 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:37.997 106.14 IOPS, 318.43 MiB/s [2024-10-15T01:12:50.721Z] 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:37.997 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:37.997 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:37.997 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:37.997 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:37.997 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.997 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:37.997 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:37.997 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.997 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.997 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:11:37.997 01:12:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.997 01:12:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.997 01:12:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.997 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.997 "name": "raid_bdev1", 00:11:37.997 "uuid": "35180ede-99af-46e3-b849-eec9befa8e13", 00:11:37.997 "strip_size_kb": 0, 00:11:37.997 "state": "online", 00:11:37.997 "raid_level": "raid1", 00:11:37.997 "superblock": false, 00:11:37.997 "num_base_bdevs": 2, 00:11:37.997 "num_base_bdevs_discovered": 2, 00:11:37.997 "num_base_bdevs_operational": 2, 00:11:37.997 "base_bdevs_list": [ 00:11:37.997 { 00:11:37.997 "name": "spare", 00:11:37.997 "uuid": "feae3e96-18fd-5975-8687-edcd97d52a11", 00:11:37.997 "is_configured": true, 00:11:37.997 "data_offset": 0, 00:11:37.997 "data_size": 65536 00:11:37.997 }, 00:11:37.997 { 00:11:37.997 "name": "BaseBdev2", 00:11:37.997 "uuid": "6fd45a65-0532-5f8d-8c70-bb34283516bf", 00:11:37.997 "is_configured": true, 00:11:37.997 "data_offset": 0, 00:11:37.998 "data_size": 65536 00:11:37.998 } 00:11:37.998 ] 00:11:37.998 }' 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.998 01:12:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.998 "name": "raid_bdev1", 00:11:37.998 "uuid": "35180ede-99af-46e3-b849-eec9befa8e13", 00:11:37.998 "strip_size_kb": 0, 00:11:37.998 "state": "online", 00:11:37.998 "raid_level": "raid1", 00:11:37.998 "superblock": false, 00:11:37.998 "num_base_bdevs": 2, 00:11:37.998 "num_base_bdevs_discovered": 2, 00:11:37.998 "num_base_bdevs_operational": 2, 00:11:37.998 "base_bdevs_list": [ 00:11:37.998 { 00:11:37.998 "name": "spare", 00:11:37.998 "uuid": "feae3e96-18fd-5975-8687-edcd97d52a11", 00:11:37.998 
"is_configured": true, 00:11:37.998 "data_offset": 0, 00:11:37.998 "data_size": 65536 00:11:37.998 }, 00:11:37.998 { 00:11:37.998 "name": "BaseBdev2", 00:11:37.998 "uuid": "6fd45a65-0532-5f8d-8c70-bb34283516bf", 00:11:37.998 "is_configured": true, 00:11:37.998 "data_offset": 0, 00:11:37.998 "data_size": 65536 00:11:37.998 } 00:11:37.998 ] 00:11:37.998 }' 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.998 01:12:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.568 [2024-10-15 01:12:51.079619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.568 [2024-10-15 01:12:51.079720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.568 00:11:38.568 Latency(us) 00:11:38.568 [2024-10-15T01:12:51.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.568 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:38.568 raid_bdev1 : 7.63 99.88 299.65 0.00 0.00 13431.65 279.03 114473.36 00:11:38.568 [2024-10-15T01:12:51.292Z] =================================================================================================================== 00:11:38.568 [2024-10-15T01:12:51.292Z] Total : 99.88 299.65 0.00 0.00 13431.65 279.03 114473.36 00:11:38.568 { 00:11:38.568 "results": [ 00:11:38.568 { 00:11:38.568 "job": "raid_bdev1", 00:11:38.568 "core_mask": "0x1", 00:11:38.568 "workload": "randrw", 00:11:38.568 "percentage": 50, 00:11:38.568 "status": "finished", 00:11:38.568 "queue_depth": 2, 00:11:38.568 "io_size": 
3145728, 00:11:38.568 "runtime": 7.628878, 00:11:38.568 "iops": 99.88362639958326, 00:11:38.568 "mibps": 299.6508791987498, 00:11:38.568 "io_failed": 0, 00:11:38.568 "io_timeout": 0, 00:11:38.568 "avg_latency_us": 13431.65151692283, 00:11:38.568 "min_latency_us": 279.0288209606987, 00:11:38.568 "max_latency_us": 114473.36244541485 00:11:38.568 } 00:11:38.568 ], 00:11:38.568 "core_count": 1 00:11:38.568 } 00:11:38.568 [2024-10-15 01:12:51.119364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.568 [2024-10-15 01:12:51.119409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.568 [2024-10-15 01:12:51.119492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.568 [2024-10-15 01:12:51.119504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 
00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:38.568 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:38.827 /dev/nbd0 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:38.827 01:12:51 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:38.827 1+0 records in 00:11:38.827 1+0 records out 00:11:38.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553553 s, 7.4 MB/s 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:38.827 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.828 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:38.828 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:38.828 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:38.828 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:38.828 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:38.828 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:38.828 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:38.828 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:38.828 
01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:38.828 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:38.828 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:38.828 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:38.828 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:39.086 /dev/nbd1 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:39.086 1+0 records in 00:11:39.086 1+0 records out 00:11:39.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474558 s, 8.6 MB/s 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.086 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:39.346 01:12:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:39.346 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:39.346 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:39.346 
01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.346 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.346 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:39.346 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:39.346 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.346 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:39.346 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:39.346 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:39.346 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:39.346 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:39.346 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.346 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # 
break 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 86855 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 86855 ']' 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 86855 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86855 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86855' 00:11:39.606 killing process with pid 86855 00:11:39.606 Received shutdown signal, test time was about 8.779835 seconds 00:11:39.606 00:11:39.606 Latency(us) 00:11:39.606 [2024-10-15T01:12:52.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.606 [2024-10-15T01:12:52.330Z] =================================================================================================================== 00:11:39.606 [2024-10-15T01:12:52.330Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 86855 00:11:39.606 [2024-10-15 01:12:52.264245] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:39.606 01:12:52 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@974 -- # wait 86855 00:11:39.606 [2024-10-15 01:12:52.290586] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:39.868 00:11:39.868 real 0m10.628s 00:11:39.868 user 0m13.756s 00:11:39.868 sys 0m1.397s 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.868 ************************************ 00:11:39.868 END TEST raid_rebuild_test_io 00:11:39.868 ************************************ 00:11:39.868 01:12:52 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:11:39.868 01:12:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:39.868 01:12:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.868 01:12:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:39.868 ************************************ 00:11:39.868 START TEST raid_rebuild_test_sb_io 00:11:39.868 ************************************ 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:39.868 
01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87225 
00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87225 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87225 ']' 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:39.868 01:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.127 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:40.127 Zero copy mechanism will not be used. 00:11:40.127 [2024-10-15 01:12:52.654120] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:11:40.127 [2024-10-15 01:12:52.654263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87225 ] 00:11:40.127 [2024-10-15 01:12:52.801324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.127 [2024-10-15 01:12:52.831867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.387 [2024-10-15 01:12:52.876106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.387 [2024-10-15 01:12:52.876140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.955 BaseBdev1_malloc 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.955 [2024-10-15 01:12:53.516429] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:40.955 [2024-10-15 01:12:53.516484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.955 [2024-10-15 01:12:53.516529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:40.955 [2024-10-15 01:12:53.516548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.955 [2024-10-15 01:12:53.518894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.955 [2024-10-15 01:12:53.518927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:40.955 BaseBdev1 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.955 BaseBdev2_malloc 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.955 [2024-10-15 01:12:53.537421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:40.955 [2024-10-15 01:12:53.537468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:40.955 [2024-10-15 01:12:53.537491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:40.955 [2024-10-15 01:12:53.537503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.955 [2024-10-15 01:12:53.539922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.955 [2024-10-15 01:12:53.539959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:40.955 BaseBdev2 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.955 spare_malloc 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.955 spare_delay 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.955 
[2024-10-15 01:12:53.566322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:40.955 [2024-10-15 01:12:53.566373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.955 [2024-10-15 01:12:53.566396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:40.955 [2024-10-15 01:12:53.566406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.955 [2024-10-15 01:12:53.568748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.955 [2024-10-15 01:12:53.568780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:40.955 spare 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.955 [2024-10-15 01:12:53.574358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.955 [2024-10-15 01:12:53.576451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.955 [2024-10-15 01:12:53.576612] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:40.955 [2024-10-15 01:12:53.576629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:40.955 [2024-10-15 01:12:53.576923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:40.955 [2024-10-15 01:12:53.577072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:40.955 [2024-10-15 
01:12:53.577093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:40.955 [2024-10-15 01:12:53.577244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.955 "name": "raid_bdev1", 00:11:40.955 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:40.955 "strip_size_kb": 0, 00:11:40.955 "state": "online", 00:11:40.955 "raid_level": "raid1", 00:11:40.955 "superblock": true, 00:11:40.955 "num_base_bdevs": 2, 00:11:40.955 "num_base_bdevs_discovered": 2, 00:11:40.955 "num_base_bdevs_operational": 2, 00:11:40.955 "base_bdevs_list": [ 00:11:40.955 { 00:11:40.955 "name": "BaseBdev1", 00:11:40.955 "uuid": "2222daf5-265f-5999-b9d4-6d0a7e8c6cb7", 00:11:40.955 "is_configured": true, 00:11:40.955 "data_offset": 2048, 00:11:40.955 "data_size": 63488 00:11:40.955 }, 00:11:40.955 { 00:11:40.955 "name": "BaseBdev2", 00:11:40.955 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:40.955 "is_configured": true, 00:11:40.955 "data_offset": 2048, 00:11:40.955 "data_size": 63488 00:11:40.955 } 00:11:40.955 ] 00:11:40.955 }' 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.955 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.214 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:41.214 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:41.214 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.214 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.473 [2024-10-15 01:12:53.937996] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.473 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.473 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:11:41.473 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:41.473 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.473 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.473 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.473 01:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:41.473 [2024-10-15 01:12:54.013580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.473 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.473 "name": "raid_bdev1", 00:11:41.474 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:41.474 "strip_size_kb": 0, 00:11:41.474 "state": "online", 00:11:41.474 "raid_level": "raid1", 00:11:41.474 "superblock": true, 00:11:41.474 "num_base_bdevs": 2, 00:11:41.474 "num_base_bdevs_discovered": 1, 00:11:41.474 "num_base_bdevs_operational": 1, 00:11:41.474 "base_bdevs_list": [ 00:11:41.474 { 00:11:41.474 "name": null, 00:11:41.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.474 "is_configured": false, 00:11:41.474 "data_offset": 0, 00:11:41.474 "data_size": 63488 00:11:41.474 }, 00:11:41.474 { 00:11:41.474 "name": "BaseBdev2", 00:11:41.474 "uuid": 
"9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:41.474 "is_configured": true, 00:11:41.474 "data_offset": 2048, 00:11:41.474 "data_size": 63488 00:11:41.474 } 00:11:41.474 ] 00:11:41.474 }' 00:11:41.474 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.474 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.474 [2024-10-15 01:12:54.111479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:41.474 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:41.474 Zero copy mechanism will not be used. 00:11:41.474 Running I/O for 60 seconds... 00:11:42.042 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:42.042 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.042 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.042 [2024-10-15 01:12:54.465522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:42.042 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.042 01:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:42.042 [2024-10-15 01:12:54.508455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:42.042 [2024-10-15 01:12:54.510607] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:42.042 [2024-10-15 01:12:54.624025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:42.042 [2024-10-15 01:12:54.624497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:42.302 [2024-10-15 01:12:54.832993] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:42.302 [2024-10-15 01:12:54.833301] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:42.561 181.00 IOPS, 543.00 MiB/s [2024-10-15T01:12:55.285Z] [2024-10-15 01:12:55.275278] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:42.561 [2024-10-15 01:12:55.275533] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:42.834 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:42.834 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.834 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:42.834 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:42.834 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.834 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.834 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.834 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.834 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.834 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.113 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.113 "name": "raid_bdev1", 00:11:43.113 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:43.113 
"strip_size_kb": 0, 00:11:43.113 "state": "online", 00:11:43.113 "raid_level": "raid1", 00:11:43.113 "superblock": true, 00:11:43.113 "num_base_bdevs": 2, 00:11:43.113 "num_base_bdevs_discovered": 2, 00:11:43.113 "num_base_bdevs_operational": 2, 00:11:43.113 "process": { 00:11:43.113 "type": "rebuild", 00:11:43.113 "target": "spare", 00:11:43.113 "progress": { 00:11:43.113 "blocks": 12288, 00:11:43.113 "percent": 19 00:11:43.113 } 00:11:43.113 }, 00:11:43.113 "base_bdevs_list": [ 00:11:43.113 { 00:11:43.113 "name": "spare", 00:11:43.113 "uuid": "e8a45fb8-77a1-574d-aa21-54325397d2e8", 00:11:43.113 "is_configured": true, 00:11:43.113 "data_offset": 2048, 00:11:43.113 "data_size": 63488 00:11:43.113 }, 00:11:43.113 { 00:11:43.113 "name": "BaseBdev2", 00:11:43.113 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:43.113 "is_configured": true, 00:11:43.113 "data_offset": 2048, 00:11:43.113 "data_size": 63488 00:11:43.113 } 00:11:43.113 ] 00:11:43.113 }' 00:11:43.113 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:43.113 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:43.113 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:43.113 [2024-10-15 01:12:55.613830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:43.113 [2024-10-15 01:12:55.614378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:43.113 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:43.113 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:43.113 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:43.113 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.113 [2024-10-15 01:12:55.642638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:43.113 [2024-10-15 01:12:55.827568] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:43.113 [2024-10-15 01:12:55.835824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.113 [2024-10-15 01:12:55.835875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:43.113 [2024-10-15 01:12:55.835895] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:43.373 [2024-10-15 01:12:55.859481] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.373 "name": "raid_bdev1", 00:11:43.373 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:43.373 "strip_size_kb": 0, 00:11:43.373 "state": "online", 00:11:43.373 "raid_level": "raid1", 00:11:43.373 "superblock": true, 00:11:43.373 "num_base_bdevs": 2, 00:11:43.373 "num_base_bdevs_discovered": 1, 00:11:43.373 "num_base_bdevs_operational": 1, 00:11:43.373 "base_bdevs_list": [ 00:11:43.373 { 00:11:43.373 "name": null, 00:11:43.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.373 "is_configured": false, 00:11:43.373 "data_offset": 0, 00:11:43.373 "data_size": 63488 00:11:43.373 }, 00:11:43.373 { 00:11:43.373 "name": "BaseBdev2", 00:11:43.373 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:43.373 "is_configured": true, 00:11:43.373 "data_offset": 2048, 00:11:43.373 "data_size": 63488 00:11:43.373 } 00:11:43.373 ] 00:11:43.373 }' 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.373 01:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.894 159.00 IOPS, 477.00 MiB/s [2024-10-15T01:12:56.618Z] 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.894 "name": "raid_bdev1", 00:11:43.894 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:43.894 "strip_size_kb": 0, 00:11:43.894 "state": "online", 00:11:43.894 "raid_level": "raid1", 00:11:43.894 "superblock": true, 00:11:43.894 "num_base_bdevs": 2, 00:11:43.894 "num_base_bdevs_discovered": 1, 00:11:43.894 "num_base_bdevs_operational": 1, 00:11:43.894 "base_bdevs_list": [ 00:11:43.894 { 00:11:43.894 "name": null, 00:11:43.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.894 "is_configured": false, 00:11:43.894 "data_offset": 0, 00:11:43.894 "data_size": 63488 00:11:43.894 }, 00:11:43.894 { 00:11:43.894 "name": "BaseBdev2", 00:11:43.894 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:43.894 "is_configured": true, 00:11:43.894 "data_offset": 2048, 00:11:43.894 "data_size": 63488 00:11:43.894 } 
00:11:43.894 ] 00:11:43.894 }' 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.894 [2024-10-15 01:12:56.521549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.894 01:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:43.894 [2024-10-15 01:12:56.565754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:43.894 [2024-10-15 01:12:56.567766] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:44.154 [2024-10-15 01:12:56.681294] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:44.154 [2024-10-15 01:12:56.681837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:44.415 [2024-10-15 01:12:56.883371] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:44.415 [2024-10-15 01:12:56.883677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:44.415 171.67 IOPS, 515.00 MiB/s [2024-10-15T01:12:57.139Z] [2024-10-15 01:12:57.117998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:44.415 [2024-10-15 01:12:57.118563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:44.674 [2024-10-15 01:12:57.346198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:44.674 [2024-10-15 01:12:57.346479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:44.935 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:44.935 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.935 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:44.935 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:44.935 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.935 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.935 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.935 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.935 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.935 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.935 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:11:44.935 "name": "raid_bdev1", 00:11:44.935 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:44.935 "strip_size_kb": 0, 00:11:44.935 "state": "online", 00:11:44.935 "raid_level": "raid1", 00:11:44.935 "superblock": true, 00:11:44.935 "num_base_bdevs": 2, 00:11:44.935 "num_base_bdevs_discovered": 2, 00:11:44.935 "num_base_bdevs_operational": 2, 00:11:44.935 "process": { 00:11:44.935 "type": "rebuild", 00:11:44.935 "target": "spare", 00:11:44.935 "progress": { 00:11:44.935 "blocks": 12288, 00:11:44.935 "percent": 19 00:11:44.935 } 00:11:44.935 }, 00:11:44.935 "base_bdevs_list": [ 00:11:44.935 { 00:11:44.935 "name": "spare", 00:11:44.935 "uuid": "e8a45fb8-77a1-574d-aa21-54325397d2e8", 00:11:44.935 "is_configured": true, 00:11:44.935 "data_offset": 2048, 00:11:44.935 "data_size": 63488 00:11:44.935 }, 00:11:44.935 { 00:11:44.935 "name": "BaseBdev2", 00:11:44.935 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:44.935 "is_configured": true, 00:11:44.935 "data_offset": 2048, 00:11:44.935 "data_size": 63488 00:11:44.935 } 00:11:44.935 ] 00:11:44.935 }' 00:11:44.935 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.935 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:44.935 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.195 [2024-10-15 01:12:57.680973] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:45.195 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: 
line 666: [: =: unary operator expected 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=329 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:45.195 "name": "raid_bdev1", 00:11:45.195 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:45.195 "strip_size_kb": 0, 00:11:45.195 "state": "online", 
00:11:45.195 "raid_level": "raid1", 00:11:45.195 "superblock": true, 00:11:45.195 "num_base_bdevs": 2, 00:11:45.195 "num_base_bdevs_discovered": 2, 00:11:45.195 "num_base_bdevs_operational": 2, 00:11:45.195 "process": { 00:11:45.195 "type": "rebuild", 00:11:45.195 "target": "spare", 00:11:45.195 "progress": { 00:11:45.195 "blocks": 14336, 00:11:45.195 "percent": 22 00:11:45.195 } 00:11:45.195 }, 00:11:45.195 "base_bdevs_list": [ 00:11:45.195 { 00:11:45.195 "name": "spare", 00:11:45.195 "uuid": "e8a45fb8-77a1-574d-aa21-54325397d2e8", 00:11:45.195 "is_configured": true, 00:11:45.195 "data_offset": 2048, 00:11:45.195 "data_size": 63488 00:11:45.195 }, 00:11:45.195 { 00:11:45.195 "name": "BaseBdev2", 00:11:45.195 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:45.195 "is_configured": true, 00:11:45.195 "data_offset": 2048, 00:11:45.195 "data_size": 63488 00:11:45.195 } 00:11:45.195 ] 00:11:45.195 }' 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:45.195 01:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:45.195 [2024-10-15 01:12:57.902362] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:45.713 141.25 IOPS, 423.75 MiB/s [2024-10-15T01:12:58.437Z] [2024-10-15 01:12:58.255046] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:45.974 [2024-10-15 01:12:58.463843] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 
18432 offset_end: 24576 00:11:45.974 [2024-10-15 01:12:58.464138] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:46.234 [2024-10-15 01:12:58.799086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:46.234 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:46.234 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:46.234 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:46.234 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:46.234 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:46.234 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:46.234 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.234 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.234 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.234 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.234 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.234 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:46.234 "name": "raid_bdev1", 00:11:46.234 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:46.234 "strip_size_kb": 0, 00:11:46.234 "state": "online", 00:11:46.234 "raid_level": "raid1", 00:11:46.234 "superblock": true, 00:11:46.234 "num_base_bdevs": 2, 00:11:46.234 
"num_base_bdevs_discovered": 2, 00:11:46.234 "num_base_bdevs_operational": 2, 00:11:46.234 "process": { 00:11:46.234 "type": "rebuild", 00:11:46.234 "target": "spare", 00:11:46.234 "progress": { 00:11:46.234 "blocks": 26624, 00:11:46.234 "percent": 41 00:11:46.234 } 00:11:46.234 }, 00:11:46.234 "base_bdevs_list": [ 00:11:46.234 { 00:11:46.234 "name": "spare", 00:11:46.234 "uuid": "e8a45fb8-77a1-574d-aa21-54325397d2e8", 00:11:46.234 "is_configured": true, 00:11:46.234 "data_offset": 2048, 00:11:46.234 "data_size": 63488 00:11:46.234 }, 00:11:46.234 { 00:11:46.234 "name": "BaseBdev2", 00:11:46.234 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:46.234 "is_configured": true, 00:11:46.234 "data_offset": 2048, 00:11:46.234 "data_size": 63488 00:11:46.234 } 00:11:46.234 ] 00:11:46.234 }' 00:11:46.234 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:46.234 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:46.234 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:46.494 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:46.494 01:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:47.432 126.60 IOPS, 379.80 MiB/s [2024-10-15T01:13:00.156Z] [2024-10-15 01:12:59.947543] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:47.432 01:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:47.432 01:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:47.432 01:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:47.432 01:12:59 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:47.432 01:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:47.432 01:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:47.432 01:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.432 01:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.432 01:12:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.432 01:12:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.432 01:12:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.432 01:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:47.432 "name": "raid_bdev1", 00:11:47.432 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:47.432 "strip_size_kb": 0, 00:11:47.432 "state": "online", 00:11:47.432 "raid_level": "raid1", 00:11:47.432 "superblock": true, 00:11:47.432 "num_base_bdevs": 2, 00:11:47.432 "num_base_bdevs_discovered": 2, 00:11:47.432 "num_base_bdevs_operational": 2, 00:11:47.432 "process": { 00:11:47.432 "type": "rebuild", 00:11:47.432 "target": "spare", 00:11:47.432 "progress": { 00:11:47.433 "blocks": 45056, 00:11:47.433 "percent": 70 00:11:47.433 } 00:11:47.433 }, 00:11:47.433 "base_bdevs_list": [ 00:11:47.433 { 00:11:47.433 "name": "spare", 00:11:47.433 "uuid": "e8a45fb8-77a1-574d-aa21-54325397d2e8", 00:11:47.433 "is_configured": true, 00:11:47.433 "data_offset": 2048, 00:11:47.433 "data_size": 63488 00:11:47.433 }, 00:11:47.433 { 00:11:47.433 "name": "BaseBdev2", 00:11:47.433 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:47.433 "is_configured": true, 00:11:47.433 "data_offset": 2048, 00:11:47.433 "data_size": 63488 00:11:47.433 } 00:11:47.433 ] 
00:11:47.433 }' 00:11:47.433 01:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:47.433 01:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:47.433 01:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:47.433 [2024-10-15 01:13:00.056688] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:47.433 01:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:47.433 01:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:47.692 114.83 IOPS, 344.50 MiB/s [2024-10-15T01:13:00.416Z] [2024-10-15 01:13:00.265850] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:48.632 [2024-10-15 01:13:01.014823] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.632 103.14 IOPS, 309.43 MiB/s [2024-10-15T01:13:01.356Z] [2024-10-15 01:13:01.120256] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:48.632 [2024-10-15 01:13:01.122926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.632 "name": "raid_bdev1", 00:11:48.632 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:48.632 "strip_size_kb": 0, 00:11:48.632 "state": "online", 00:11:48.632 "raid_level": "raid1", 00:11:48.632 "superblock": true, 00:11:48.632 "num_base_bdevs": 2, 00:11:48.632 "num_base_bdevs_discovered": 2, 00:11:48.632 "num_base_bdevs_operational": 2, 00:11:48.632 "process": { 00:11:48.632 "type": "rebuild", 00:11:48.632 "target": "spare", 00:11:48.632 "progress": { 00:11:48.632 "blocks": 63488, 00:11:48.632 "percent": 100 00:11:48.632 } 00:11:48.632 }, 00:11:48.632 "base_bdevs_list": [ 00:11:48.632 { 00:11:48.632 "name": "spare", 00:11:48.632 "uuid": "e8a45fb8-77a1-574d-aa21-54325397d2e8", 00:11:48.632 "is_configured": true, 00:11:48.632 "data_offset": 2048, 00:11:48.632 "data_size": 63488 00:11:48.632 }, 00:11:48.632 { 00:11:48.632 "name": "BaseBdev2", 00:11:48.632 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:48.632 "is_configured": true, 00:11:48.632 "data_offset": 2048, 00:11:48.632 "data_size": 63488 00:11:48.632 } 00:11:48.632 ] 00:11:48.632 }' 00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.632 01:13:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:48.632 01:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:11:49.570 94.50 IOPS, 283.50 MiB/s [2024-10-15T01:13:02.294Z]
01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:11:49.570 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:49.570 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:49.570 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:49.570 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:49.570 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:49.570 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:49.570 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:49.570 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:49.570 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:49.570 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:49.570 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:49.570 "name": "raid_bdev1",
00:11:49.570 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca",
00:11:49.570 "strip_size_kb": 0,
00:11:49.570 "state": "online",
00:11:49.570 "raid_level": "raid1",
00:11:49.570 "superblock": true,
00:11:49.570 "num_base_bdevs": 2,
00:11:49.570 "num_base_bdevs_discovered": 2,
00:11:49.570 "num_base_bdevs_operational": 2,
00:11:49.570 "base_bdevs_list": [
00:11:49.570 {
00:11:49.570 "name": "spare",
00:11:49.570 "uuid": "e8a45fb8-77a1-574d-aa21-54325397d2e8",
00:11:49.570 "is_configured": true,
00:11:49.570 "data_offset": 2048,
00:11:49.570 "data_size": 63488
00:11:49.570 },
00:11:49.570 {
00:11:49.570 "name": "BaseBdev2",
00:11:49.570 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17",
00:11:49.570 "is_configured": true,
00:11:49.570 "data_offset": 2048,
00:11:49.570 "data_size": 63488
00:11:49.570 }
00:11:49.570 ]
00:11:49.570 }'
00:11:49.570 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:49.829 "name": "raid_bdev1",
00:11:49.829 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca",
00:11:49.829 "strip_size_kb": 0,
00:11:49.829 "state": "online",
00:11:49.829 "raid_level": "raid1",
00:11:49.829 "superblock": true,
00:11:49.829 "num_base_bdevs": 2,
00:11:49.829 "num_base_bdevs_discovered": 2,
00:11:49.829 "num_base_bdevs_operational": 2,
00:11:49.829 "base_bdevs_list": [
00:11:49.829 {
00:11:49.829 "name": "spare",
00:11:49.829 "uuid": "e8a45fb8-77a1-574d-aa21-54325397d2e8",
00:11:49.829 "is_configured": true,
00:11:49.829 "data_offset": 2048,
00:11:49.829 "data_size": 63488
00:11:49.829 },
00:11:49.829 {
00:11:49.829 "name": "BaseBdev2",
00:11:49.829 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17",
00:11:49.829 "is_configured": true,
00:11:49.829 "data_offset": 2048,
00:11:49.829 "data_size": 63488
00:11:49.829 }
00:11:49.829 ]
00:11:49.829 }'
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:49.829 "name": "raid_bdev1",
00:11:49.829 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca",
00:11:49.829 "strip_size_kb": 0,
00:11:49.829 "state": "online",
00:11:49.829 "raid_level": "raid1",
00:11:49.829 "superblock": true,
00:11:49.829 "num_base_bdevs": 2,
00:11:49.829 "num_base_bdevs_discovered": 2,
00:11:49.829 "num_base_bdevs_operational": 2,
00:11:49.829 "base_bdevs_list": [
00:11:49.829 {
00:11:49.829 "name": "spare",
00:11:49.829 "uuid": "e8a45fb8-77a1-574d-aa21-54325397d2e8",
00:11:49.829 "is_configured": true,
00:11:49.829 "data_offset": 2048,
00:11:49.829 "data_size": 63488
00:11:49.829 },
00:11:49.829 {
00:11:49.829 "name": "BaseBdev2",
00:11:49.829 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17",
00:11:49.829 "is_configured": true,
00:11:49.829 "data_offset": 2048,
00:11:49.829 "data_size": 63488
00:11:49.829 }
00:11:49.829 ]
00:11:49.829 }'
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:49.829 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:50.398 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:50.398 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.398 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:50.398 [2024-10-15 01:13:02.930120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:50.398 [2024-10-15 01:13:02.930235] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:50.398
00:11:50.398 Latency(us)
00:11:50.398 [2024-10-15T01:13:03.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:50.398 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:11:50.398 raid_bdev1 : 8.85 89.16 267.47 0.00 0.00 15709.05 284.39 108978.64
00:11:50.398 [2024-10-15T01:13:03.122Z] ===================================================================================================================
00:11:50.398 [2024-10-15T01:13:03.122Z] Total : 89.16 267.47 0.00 0.00 15709.05 284.39 108978.64
00:11:50.398 {
00:11:50.398 "results": [
00:11:50.398 {
00:11:50.398 "job": "raid_bdev1",
00:11:50.398 "core_mask": "0x1",
00:11:50.398 "workload": "randrw",
00:11:50.398 "percentage": 50,
00:11:50.398 "status": "finished",
00:11:50.398 "queue_depth": 2,
00:11:50.398 "io_size": 3145728,
00:11:50.398 "runtime": 8.849504,
00:11:50.398 "iops": 89.15753922479723,
00:11:50.398 "mibps": 267.4726176743917,
00:11:50.398 "io_failed": 0,
00:11:50.398 "io_timeout": 0,
00:11:50.398 "avg_latency_us": 15709.053529701518,
00:11:50.398 "min_latency_us": 284.3947598253275,
00:11:50.398 "max_latency_us": 108978.64104803493
00:11:50.398 }
00:11:50.398 ],
00:11:50.398 "core_count": 1
00:11:50.398 }
00:11:50.398 [2024-10-15 01:13:02.949490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:50.398 [2024-10-15 01:13:02.949527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:50.398 [2024-10-15 01:13:02.949611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:50.398 [2024-10-15 01:13:02.949624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline
00:11:50.398 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.398 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:50.398 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:50.398 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:50.398 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length
00:11:50.398 01:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:50.398 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:11:50.398 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:11:50.398 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']'
00:11:50.398 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0
00:11:50.398 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:11:50.398 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:11:50.398 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:11:50.398 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:11:50.398 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:11:50.398 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:11:50.398 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:11:50.398 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:11:50.398 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0
00:11:50.658 /dev/nbd0
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:50.658 1+0 records in
00:11:50.658 1+0 records out
00:11:50.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250388 s, 16.4 MB/s
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']'
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2')
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:11:50.658 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1
00:11:50.918 /dev/nbd1
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:50.918 1+0 records in
00:11:50.918 1+0 records out
00:11:50.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290573 s, 14.1 MB/s
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:11:50.918 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0
00:11:50.919 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:50.919 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:11:50.919 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:11:50.919 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:11:50.919 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:11:50.919 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:11:50.919 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:50.919 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:11:50.919 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:50.919 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:11:51.178 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:11:51.178 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:11:51.178 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:11:51.178 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:51.178 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:51.178 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:11:51.178 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:11:51.178 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:11:51.178 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:11:51.178 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:11:51.178 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:11:51.178 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:51.178 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:11:51.178 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:51.178 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:11:51.438 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:51.438 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:51.438 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:51.438 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:51.438 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:51.438 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:51.438 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:11:51.438 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:11:51.438 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:11:51.438 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:11:51.438 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.438 01:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:51.438 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.438 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:11:51.438 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.438 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:51.438 [2024-10-15 01:13:04.008791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:11:51.438 [2024-10-15 01:13:04.008882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:51.438 [2024-10-15 01:13:04.008924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:11:51.438 [2024-10-15 01:13:04.008951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:51.438 [2024-10-15 01:13:04.011353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:51.438 [2024-10-15 01:13:04.011423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:11:51.438 [2024-10-15 01:13:04.011534] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:11:51.438 [2024-10-15 01:13:04.011603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:51.438 [2024-10-15 01:13:04.011775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:51.438 spare
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:51.439 [2024-10-15 01:13:04.111715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580
00:11:51.439 [2024-10-15 01:13:04.111801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:11:51.439 [2024-10-15 01:13:04.112116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027720
00:11:51.439 [2024-10-15 01:13:04.112322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580
00:11:51.439 [2024-10-15 01:13:04.112367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580
00:11:51.439 [2024-10-15 01:13:04.112546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:51.439 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.698 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:51.698 "name": "raid_bdev1",
00:11:51.698 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca",
00:11:51.698 "strip_size_kb": 0,
00:11:51.698 "state": "online",
00:11:51.698 "raid_level": "raid1",
00:11:51.698 "superblock": true,
00:11:51.698 "num_base_bdevs": 2,
00:11:51.698 "num_base_bdevs_discovered": 2,
00:11:51.698 "num_base_bdevs_operational": 2,
00:11:51.698 "base_bdevs_list": [
00:11:51.698 {
00:11:51.698 "name": "spare",
00:11:51.698 "uuid": "e8a45fb8-77a1-574d-aa21-54325397d2e8",
00:11:51.698 "is_configured": true,
00:11:51.698 "data_offset": 2048,
00:11:51.698 "data_size": 63488
00:11:51.698 },
00:11:51.698 {
00:11:51.698 "name": "BaseBdev2",
00:11:51.698 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17",
00:11:51.698 "is_configured": true,
00:11:51.698 "data_offset": 2048,
00:11:51.698 "data_size": 63488
00:11:51.698 }
00:11:51.698 ]
00:11:51.698 }'
00:11:51.698 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:51.698 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:51.958 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:11:51.958 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:51.958 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:11:51.958 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:11:51.958 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:51.958 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:51.958 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.958 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:51.958 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:51.958 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.958 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:51.958 "name": "raid_bdev1",
00:11:51.958 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca",
00:11:51.958 "strip_size_kb": 0,
00:11:51.958 "state": "online",
00:11:51.958 "raid_level": "raid1",
00:11:51.958 "superblock": true,
00:11:51.958 "num_base_bdevs": 2,
00:11:51.958 "num_base_bdevs_discovered": 2,
00:11:51.958 "num_base_bdevs_operational": 2,
00:11:51.958 "base_bdevs_list": [
00:11:51.958 {
00:11:51.958 "name": "spare",
00:11:51.958 "uuid": "e8a45fb8-77a1-574d-aa21-54325397d2e8",
00:11:51.958 "is_configured": true,
00:11:51.958 "data_offset": 2048,
00:11:51.958 "data_size": 63488
00:11:51.958 },
00:11:51.958 {
00:11:51.958 "name": "BaseBdev2",
00:11:51.958 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17",
00:11:51.958 "is_configured": true,
00:11:51.958 "data_offset": 2048,
00:11:51.958 "data_size": 63488
00:11:51.958 }
00:11:51.958 ]
00:11:51.958 }'
00:11:51.958 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:51.958 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:51.958 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:52.217 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:52.217 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:52.217 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:11:52.217 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.217 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:52.217 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.217 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:52.218 [2024-10-15 01:13:04.767729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:52.218 "name": "raid_bdev1",
00:11:52.218 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca",
00:11:52.218 "strip_size_kb": 0,
00:11:52.218 "state": "online",
00:11:52.218 "raid_level": "raid1",
00:11:52.218 "superblock": true,
00:11:52.218 "num_base_bdevs": 2,
00:11:52.218 "num_base_bdevs_discovered": 1,
00:11:52.218 "num_base_bdevs_operational": 1,
00:11:52.218 "base_bdevs_list": [
00:11:52.218 {
00:11:52.218 "name": null,
00:11:52.218 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:52.218 "is_configured": false,
00:11:52.218 "data_offset": 0,
00:11:52.218 "data_size": 63488
00:11:52.218 },
00:11:52.218 {
00:11:52.218 "name": "BaseBdev2",
00:11:52.218 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17",
00:11:52.218 "is_configured": true,
00:11:52.218 "data_offset": 2048,
00:11:52.218 "data_size": 63488
00:11:52.218 }
00:11:52.218 ]
00:11:52.218 }'
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:52.218 01:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:52.477 01:13:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:11:52.477 01:13:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.477 01:13:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:52.477 [2024-10-15 01:13:05.199055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:52.477 [2024-10-15 01:13:05.199308] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:11:52.477 [2024-10-15 01:13:05.199366] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:11:52.477 [2024-10-15 01:13:05.199438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:52.759 [2024-10-15 01:13:05.204765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000277f0
00:11:52.759 01:13:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.759 01:13:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:11:52.759 [2024-10-15 01:13:05.206614] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:53.710 "name": "raid_bdev1",
00:11:53.710 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca",
00:11:53.710 "strip_size_kb": 0,
00:11:53.710 "state": "online",
00:11:53.710 "raid_level": "raid1",
00:11:53.710 "superblock": true,
00:11:53.710 "num_base_bdevs": 2,
00:11:53.710 "num_base_bdevs_discovered": 2,
00:11:53.710 "num_base_bdevs_operational": 2,
00:11:53.710 "process": {
00:11:53.710 "type": "rebuild",
00:11:53.710 "target": "spare",
00:11:53.710 "progress": {
00:11:53.710 "blocks": 20480,
00:11:53.710 "percent": 32
00:11:53.710 }
00:11:53.710 },
00:11:53.710 "base_bdevs_list": [
00:11:53.710 {
00:11:53.710 "name": "spare",
00:11:53.710 "uuid": "e8a45fb8-77a1-574d-aa21-54325397d2e8",
00:11:53.710 "is_configured": true,
00:11:53.710 "data_offset": 2048,
00:11:53.710 "data_size": 63488
00:11:53.710 },
00:11:53.710 {
00:11:53.710 "name": "BaseBdev2",
00:11:53.710 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17",
00:11:53.710 "is_configured": true,
00:11:53.710 "data_offset": 2048,
00:11:53.710 "data_size": 63488
00:11:53.710 }
00:11:53.710 ]
00:11:53.710 }'
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:53.710 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:11:53.711 [2024-10-15 01:13:06.371307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:53.711 [2024-10-15 01:13:06.411151] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:11:53.711 [2024-10-15 01:13:06.411285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:53.711 [2024-10-15 01:13:06.411302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:53.711 [2024-10-15 01:13:06.411311] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:11:53.711 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:53.711 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:53.711 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:53.711 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:53.711 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:53.711 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:53.711 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:53.711 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:53.711 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:53.711 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:53.711 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:53.711 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- #
rpc_cmd bdev_raid_get_bdevs all 00:11:53.711 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.711 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.711 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.970 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.970 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.970 "name": "raid_bdev1", 00:11:53.970 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:53.970 "strip_size_kb": 0, 00:11:53.970 "state": "online", 00:11:53.970 "raid_level": "raid1", 00:11:53.970 "superblock": true, 00:11:53.970 "num_base_bdevs": 2, 00:11:53.970 "num_base_bdevs_discovered": 1, 00:11:53.970 "num_base_bdevs_operational": 1, 00:11:53.970 "base_bdevs_list": [ 00:11:53.970 { 00:11:53.970 "name": null, 00:11:53.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.970 "is_configured": false, 00:11:53.970 "data_offset": 0, 00:11:53.970 "data_size": 63488 00:11:53.970 }, 00:11:53.970 { 00:11:53.970 "name": "BaseBdev2", 00:11:53.970 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:53.970 "is_configured": true, 00:11:53.970 "data_offset": 2048, 00:11:53.970 "data_size": 63488 00:11:53.970 } 00:11:53.970 ] 00:11:53.970 }' 00:11:53.970 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.970 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.231 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:54.231 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.231 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:11:54.231 [2024-10-15 01:13:06.875767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:54.231 [2024-10-15 01:13:06.875876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.231 [2024-10-15 01:13:06.875918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:54.231 [2024-10-15 01:13:06.875948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.231 [2024-10-15 01:13:06.876433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.231 [2024-10-15 01:13:06.876493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:54.231 [2024-10-15 01:13:06.876607] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:54.231 [2024-10-15 01:13:06.876649] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:54.231 [2024-10-15 01:13:06.876689] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:54.231 [2024-10-15 01:13:06.876737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:54.231 [2024-10-15 01:13:06.882055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:11:54.231 spare 00:11:54.231 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.231 01:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:54.231 [2024-10-15 01:13:06.883953] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:55.171 01:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:55.171 01:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.171 01:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:55.171 01:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:55.171 01:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.171 01:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.171 01:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.171 01:13:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.171 01:13:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.434 01:13:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.434 01:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.434 "name": "raid_bdev1", 00:11:55.434 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:55.434 "strip_size_kb": 0, 00:11:55.434 
"state": "online", 00:11:55.434 "raid_level": "raid1", 00:11:55.434 "superblock": true, 00:11:55.434 "num_base_bdevs": 2, 00:11:55.434 "num_base_bdevs_discovered": 2, 00:11:55.434 "num_base_bdevs_operational": 2, 00:11:55.434 "process": { 00:11:55.434 "type": "rebuild", 00:11:55.434 "target": "spare", 00:11:55.434 "progress": { 00:11:55.434 "blocks": 20480, 00:11:55.434 "percent": 32 00:11:55.434 } 00:11:55.434 }, 00:11:55.434 "base_bdevs_list": [ 00:11:55.434 { 00:11:55.434 "name": "spare", 00:11:55.434 "uuid": "e8a45fb8-77a1-574d-aa21-54325397d2e8", 00:11:55.434 "is_configured": true, 00:11:55.434 "data_offset": 2048, 00:11:55.434 "data_size": 63488 00:11:55.434 }, 00:11:55.434 { 00:11:55.434 "name": "BaseBdev2", 00:11:55.434 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:55.434 "is_configured": true, 00:11:55.434 "data_offset": 2048, 00:11:55.434 "data_size": 63488 00:11:55.434 } 00:11:55.434 ] 00:11:55.434 }' 00:11:55.434 01:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.434 01:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:55.434 01:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.434 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:55.434 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:55.434 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.434 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.434 [2024-10-15 01:13:08.020430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:55.434 [2024-10-15 01:13:08.088278] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:11:55.435 [2024-10-15 01:13:08.088378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.435 [2024-10-15 01:13:08.088398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:55.435 [2024-10-15 01:13:08.088405] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.435 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.435 "name": "raid_bdev1", 00:11:55.435 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:55.435 "strip_size_kb": 0, 00:11:55.435 "state": "online", 00:11:55.435 "raid_level": "raid1", 00:11:55.435 "superblock": true, 00:11:55.435 "num_base_bdevs": 2, 00:11:55.436 "num_base_bdevs_discovered": 1, 00:11:55.436 "num_base_bdevs_operational": 1, 00:11:55.436 "base_bdevs_list": [ 00:11:55.436 { 00:11:55.436 "name": null, 00:11:55.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.436 "is_configured": false, 00:11:55.436 "data_offset": 0, 00:11:55.436 "data_size": 63488 00:11:55.436 }, 00:11:55.436 { 00:11:55.436 "name": "BaseBdev2", 00:11:55.436 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:55.436 "is_configured": true, 00:11:55.436 "data_offset": 2048, 00:11:55.436 "data_size": 63488 00:11:55.436 } 00:11:55.436 ] 00:11:55.436 }' 00:11:55.436 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.436 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.004 "name": "raid_bdev1", 00:11:56.004 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:56.004 "strip_size_kb": 0, 00:11:56.004 "state": "online", 00:11:56.004 "raid_level": "raid1", 00:11:56.004 "superblock": true, 00:11:56.004 "num_base_bdevs": 2, 00:11:56.004 "num_base_bdevs_discovered": 1, 00:11:56.004 "num_base_bdevs_operational": 1, 00:11:56.004 "base_bdevs_list": [ 00:11:56.004 { 00:11:56.004 "name": null, 00:11:56.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.004 "is_configured": false, 00:11:56.004 "data_offset": 0, 00:11:56.004 "data_size": 63488 00:11:56.004 }, 00:11:56.004 { 00:11:56.004 "name": "BaseBdev2", 00:11:56.004 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:56.004 "is_configured": true, 00:11:56.004 "data_offset": 2048, 00:11:56.004 "data_size": 63488 00:11:56.004 } 00:11:56.004 ] 00:11:56.004 }' 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.004 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.264 [2024-10-15 01:13:08.728334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:56.264 [2024-10-15 01:13:08.728431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.264 [2024-10-15 01:13:08.728461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:56.264 [2024-10-15 01:13:08.728471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.264 [2024-10-15 01:13:08.728864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.264 [2024-10-15 01:13:08.728881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:56.264 [2024-10-15 01:13:08.728958] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:56.264 [2024-10-15 01:13:08.728970] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:56.264 [2024-10-15 01:13:08.728982] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:56.264 [2024-10-15 01:13:08.728993] 
bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:56.264 BaseBdev1 00:11:56.264 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.264 01:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.204 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.204 "name": "raid_bdev1", 00:11:57.204 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:57.204 "strip_size_kb": 0, 00:11:57.204 "state": "online", 00:11:57.204 "raid_level": "raid1", 00:11:57.204 "superblock": true, 00:11:57.204 "num_base_bdevs": 2, 00:11:57.204 "num_base_bdevs_discovered": 1, 00:11:57.204 "num_base_bdevs_operational": 1, 00:11:57.205 "base_bdevs_list": [ 00:11:57.205 { 00:11:57.205 "name": null, 00:11:57.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.205 "is_configured": false, 00:11:57.205 "data_offset": 0, 00:11:57.205 "data_size": 63488 00:11:57.205 }, 00:11:57.205 { 00:11:57.205 "name": "BaseBdev2", 00:11:57.205 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:57.205 "is_configured": true, 00:11:57.205 "data_offset": 2048, 00:11:57.205 "data_size": 63488 00:11:57.205 } 00:11:57.205 ] 00:11:57.205 }' 00:11:57.205 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.205 01:13:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.464 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:57.464 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:57.464 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:57.464 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:57.464 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.464 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.464 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.464 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.464 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.464 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.724 "name": "raid_bdev1", 00:11:57.724 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:57.724 "strip_size_kb": 0, 00:11:57.724 "state": "online", 00:11:57.724 "raid_level": "raid1", 00:11:57.724 "superblock": true, 00:11:57.724 "num_base_bdevs": 2, 00:11:57.724 "num_base_bdevs_discovered": 1, 00:11:57.724 "num_base_bdevs_operational": 1, 00:11:57.724 "base_bdevs_list": [ 00:11:57.724 { 00:11:57.724 "name": null, 00:11:57.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.724 "is_configured": false, 00:11:57.724 "data_offset": 0, 00:11:57.724 "data_size": 63488 00:11:57.724 }, 00:11:57.724 { 00:11:57.724 "name": "BaseBdev2", 00:11:57.724 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:57.724 "is_configured": true, 00:11:57.724 "data_offset": 2048, 00:11:57.724 "data_size": 63488 00:11:57.724 } 00:11:57.724 ] 00:11:57.724 }' 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@650 -- # local es=0 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.724 [2024-10-15 01:13:10.278073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.724 [2024-10-15 01:13:10.278236] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:57.724 [2024-10-15 01:13:10.278253] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:57.724 request: 00:11:57.724 { 00:11:57.724 "base_bdev": "BaseBdev1", 00:11:57.724 "raid_bdev": "raid_bdev1", 00:11:57.724 "method": "bdev_raid_add_base_bdev", 00:11:57.724 "req_id": 1 00:11:57.724 } 00:11:57.724 Got JSON-RPC error response 00:11:57.724 response: 00:11:57.724 { 00:11:57.724 "code": -22, 00:11:57.724 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:57.724 } 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:57.724 01:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.663 "name": "raid_bdev1", 00:11:58.663 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:58.663 "strip_size_kb": 0, 00:11:58.663 "state": "online", 00:11:58.663 "raid_level": "raid1", 00:11:58.663 "superblock": true, 00:11:58.663 "num_base_bdevs": 2, 00:11:58.663 "num_base_bdevs_discovered": 1, 00:11:58.663 "num_base_bdevs_operational": 1, 00:11:58.663 "base_bdevs_list": [ 00:11:58.663 { 00:11:58.663 "name": null, 00:11:58.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.663 "is_configured": false, 00:11:58.663 "data_offset": 0, 00:11:58.663 "data_size": 63488 00:11:58.663 }, 00:11:58.663 { 00:11:58.663 "name": "BaseBdev2", 00:11:58.663 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:58.663 "is_configured": true, 00:11:58.663 "data_offset": 2048, 00:11:58.663 "data_size": 63488 00:11:58.663 } 00:11:58.663 ] 00:11:58.663 }' 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.663 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.230 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:59.230 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:59.230 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:59.230 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:59.230 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:59.230 01:13:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.230 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.230 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.230 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.231 "name": "raid_bdev1", 00:11:59.231 "uuid": "80d210b6-8e8a-479f-88d8-700a5a0dbbca", 00:11:59.231 "strip_size_kb": 0, 00:11:59.231 "state": "online", 00:11:59.231 "raid_level": "raid1", 00:11:59.231 "superblock": true, 00:11:59.231 "num_base_bdevs": 2, 00:11:59.231 "num_base_bdevs_discovered": 1, 00:11:59.231 "num_base_bdevs_operational": 1, 00:11:59.231 "base_bdevs_list": [ 00:11:59.231 { 00:11:59.231 "name": null, 00:11:59.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.231 "is_configured": false, 00:11:59.231 "data_offset": 0, 00:11:59.231 "data_size": 63488 00:11:59.231 }, 00:11:59.231 { 00:11:59.231 "name": "BaseBdev2", 00:11:59.231 "uuid": "9fa54453-f5d8-5393-9e42-fdfc5682cf17", 00:11:59.231 "is_configured": true, 00:11:59.231 "data_offset": 2048, 00:11:59.231 "data_size": 63488 00:11:59.231 } 00:11:59.231 ] 00:11:59.231 }' 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:59.231 01:13:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87225 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87225 ']' 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87225 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87225 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87225' 00:11:59.231 killing process with pid 87225 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87225 00:11:59.231 Received shutdown signal, test time was about 17.821651 seconds 00:11:59.231 00:11:59.231 Latency(us) 00:11:59.231 [2024-10-15T01:13:11.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:59.231 [2024-10-15T01:13:11.955Z] =================================================================================================================== 00:11:59.231 [2024-10-15T01:13:11.955Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:59.231 [2024-10-15 01:13:11.900894] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:59.231 [2024-10-15 01:13:11.901034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.231 01:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87225 00:11:59.231 [2024-10-15 01:13:11.901089] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.231 [2024-10-15 01:13:11.901101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:11:59.231 [2024-10-15 01:13:11.927806] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.490 ************************************ 00:11:59.490 END TEST raid_rebuild_test_sb_io 00:11:59.490 ************************************ 00:11:59.490 01:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:59.490 00:11:59.490 real 0m19.560s 00:11:59.490 user 0m25.698s 00:11:59.490 sys 0m2.100s 00:11:59.490 01:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.490 01:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.490 01:13:12 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:59.490 01:13:12 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:11:59.490 01:13:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:59.490 01:13:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.490 01:13:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:59.490 ************************************ 00:11:59.490 START TEST raid_rebuild_test 00:11:59.490 ************************************ 00:11:59.490 01:13:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:59.491 01:13:12 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=87922 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:59.491 01:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 87922 00:11:59.750 01:13:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 87922 ']' 00:11:59.750 01:13:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.750 01:13:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:59.750 01:13:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.750 01:13:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:59.750 01:13:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.750 [2024-10-15 01:13:12.292909] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:11:59.750 [2024-10-15 01:13:12.293086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87922 ] 00:11:59.750 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:59.750 Zero copy mechanism will not be used. 00:11:59.750 [2024-10-15 01:13:12.435715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.750 [2024-10-15 01:13:12.462352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.010 [2024-10-15 01:13:12.504858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.010 [2024-10-15 01:13:12.504975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.580 BaseBdev1_malloc 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:12:00.580 [2024-10-15 01:13:13.135208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:00.580 [2024-10-15 01:13:13.135328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.580 [2024-10-15 01:13:13.135372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:00.580 [2024-10-15 01:13:13.135385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.580 [2024-10-15 01:13:13.137432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.580 [2024-10-15 01:13:13.137469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:00.580 BaseBdev1 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.580 BaseBdev2_malloc 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.580 [2024-10-15 01:13:13.163679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:00.580 [2024-10-15 01:13:13.163732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:12:00.580 [2024-10-15 01:13:13.163768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:00.580 [2024-10-15 01:13:13.163776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.580 [2024-10-15 01:13:13.165857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.580 [2024-10-15 01:13:13.165896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:00.580 BaseBdev2 00:12:00.580 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.581 BaseBdev3_malloc 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.581 [2024-10-15 01:13:13.192311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:00.581 [2024-10-15 01:13:13.192413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.581 [2024-10-15 01:13:13.192441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:00.581 [2024-10-15 01:13:13.192449] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.581 [2024-10-15 01:13:13.194446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.581 [2024-10-15 01:13:13.194479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:00.581 BaseBdev3 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.581 BaseBdev4_malloc 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.581 [2024-10-15 01:13:13.236857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:00.581 [2024-10-15 01:13:13.236956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.581 [2024-10-15 01:13:13.237005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:00.581 [2024-10-15 01:13:13.237026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.581 [2024-10-15 01:13:13.241308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.581 [2024-10-15 01:13:13.241356] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:00.581 BaseBdev4 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.581 spare_malloc 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.581 spare_delay 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.581 [2024-10-15 01:13:13.278684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:00.581 [2024-10-15 01:13:13.278728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.581 [2024-10-15 01:13:13.278746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:00.581 [2024-10-15 01:13:13.278754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.581 [2024-10-15 
01:13:13.280834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.581 [2024-10-15 01:13:13.280908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:00.581 spare 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.581 [2024-10-15 01:13:13.290724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.581 [2024-10-15 01:13:13.292580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.581 [2024-10-15 01:13:13.292640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.581 [2024-10-15 01:13:13.292687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:00.581 [2024-10-15 01:13:13.292763] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:00.581 [2024-10-15 01:13:13.292772] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:00.581 [2024-10-15 01:13:13.293036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:00.581 [2024-10-15 01:13:13.293161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:00.581 [2024-10-15 01:13:13.293173] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:00.581 [2024-10-15 01:13:13.293296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.581 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.841 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.841 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.841 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.841 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.841 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.841 "name": "raid_bdev1", 00:12:00.841 "uuid": "6dbd7b0f-8309-4716-871e-40f487f3cabb", 00:12:00.841 "strip_size_kb": 0, 00:12:00.841 "state": "online", 00:12:00.841 "raid_level": 
"raid1", 00:12:00.841 "superblock": false, 00:12:00.841 "num_base_bdevs": 4, 00:12:00.841 "num_base_bdevs_discovered": 4, 00:12:00.841 "num_base_bdevs_operational": 4, 00:12:00.841 "base_bdevs_list": [ 00:12:00.841 { 00:12:00.841 "name": "BaseBdev1", 00:12:00.841 "uuid": "3a66ecbd-2d3c-5838-abc2-12aa5faab564", 00:12:00.841 "is_configured": true, 00:12:00.841 "data_offset": 0, 00:12:00.841 "data_size": 65536 00:12:00.841 }, 00:12:00.841 { 00:12:00.841 "name": "BaseBdev2", 00:12:00.841 "uuid": "c97f13a8-d2e6-5953-924d-5ee6a34d032b", 00:12:00.841 "is_configured": true, 00:12:00.841 "data_offset": 0, 00:12:00.841 "data_size": 65536 00:12:00.841 }, 00:12:00.841 { 00:12:00.841 "name": "BaseBdev3", 00:12:00.841 "uuid": "080e87ae-ea63-57a3-8315-7aee74d783d4", 00:12:00.841 "is_configured": true, 00:12:00.841 "data_offset": 0, 00:12:00.841 "data_size": 65536 00:12:00.841 }, 00:12:00.841 { 00:12:00.841 "name": "BaseBdev4", 00:12:00.841 "uuid": "fd0d9a13-a10b-5aa8-87a7-22b1c6235a22", 00:12:00.841 "is_configured": true, 00:12:00.841 "data_offset": 0, 00:12:00.841 "data_size": 65536 00:12:00.841 } 00:12:00.841 ] 00:12:00.841 }' 00:12:00.841 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.841 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.101 [2024-10-15 01:13:13.702362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.101 01:13:13 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:01.101 01:13:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:01.102 01:13:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:01.102 01:13:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:01.102 01:13:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:01.102 01:13:13 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:01.365 [2024-10-15 01:13:13.989581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:12:01.365 /dev/nbd0 00:12:01.365 01:13:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:01.365 01:13:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:01.365 01:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:01.365 01:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:01.365 01:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:01.365 01:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:01.365 01:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:01.365 01:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:01.365 01:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:01.365 01:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:01.365 01:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:01.365 1+0 records in 00:12:01.365 1+0 records out 00:12:01.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509747 s, 8.0 MB/s 00:12:01.366 01:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.366 01:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:01.366 01:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:01.366 01:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:01.366 01:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:01.366 01:13:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:01.366 01:13:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:01.366 01:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:01.366 01:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:01.366 01:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:06.638 65536+0 records in 00:12:06.638 65536+0 records out 00:12:06.638 33554432 bytes (34 MB, 32 MiB) copied, 5.17526 s, 6.5 MB/s 00:12:06.638 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:06.638 01:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:06.638 01:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:06.638 01:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:06.638 01:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:06.638 01:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.638 01:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:06.897 [2024-10-15 01:13:19.446452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:06.897 
01:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.897 [2024-10-15 01:13:19.462489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.897 01:13:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.897 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.897 "name": "raid_bdev1", 00:12:06.897 "uuid": "6dbd7b0f-8309-4716-871e-40f487f3cabb", 00:12:06.897 "strip_size_kb": 0, 00:12:06.897 "state": "online", 00:12:06.897 "raid_level": "raid1", 00:12:06.897 "superblock": false, 00:12:06.897 "num_base_bdevs": 4, 00:12:06.897 "num_base_bdevs_discovered": 3, 00:12:06.897 "num_base_bdevs_operational": 3, 00:12:06.897 "base_bdevs_list": [ 00:12:06.897 { 00:12:06.897 "name": null, 00:12:06.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.897 "is_configured": false, 00:12:06.897 "data_offset": 0, 00:12:06.897 "data_size": 65536 00:12:06.897 }, 00:12:06.897 { 00:12:06.897 "name": "BaseBdev2", 00:12:06.898 "uuid": "c97f13a8-d2e6-5953-924d-5ee6a34d032b", 00:12:06.898 "is_configured": true, 00:12:06.898 "data_offset": 0, 00:12:06.898 "data_size": 65536 00:12:06.898 }, 00:12:06.898 { 00:12:06.898 "name": "BaseBdev3", 00:12:06.898 "uuid": "080e87ae-ea63-57a3-8315-7aee74d783d4", 00:12:06.898 "is_configured": true, 00:12:06.898 "data_offset": 0, 00:12:06.898 "data_size": 65536 00:12:06.898 }, 00:12:06.898 { 00:12:06.898 "name": "BaseBdev4", 00:12:06.898 "uuid": "fd0d9a13-a10b-5aa8-87a7-22b1c6235a22", 00:12:06.898 
"is_configured": true, 00:12:06.898 "data_offset": 0, 00:12:06.898 "data_size": 65536 00:12:06.898 } 00:12:06.898 ] 00:12:06.898 }' 00:12:06.898 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.898 01:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.465 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:07.465 01:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.465 01:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.465 [2024-10-15 01:13:19.929753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:07.465 [2024-10-15 01:13:19.934053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d063c0 00:12:07.465 01:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.465 01:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:07.465 [2024-10-15 01:13:19.935915] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:08.402 01:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.402 01:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.402 01:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.402 01:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.402 01:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.402 01:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.402 01:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:08.402 01:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.402 01:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.402 01:13:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.402 01:13:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.402 "name": "raid_bdev1", 00:12:08.402 "uuid": "6dbd7b0f-8309-4716-871e-40f487f3cabb", 00:12:08.402 "strip_size_kb": 0, 00:12:08.402 "state": "online", 00:12:08.402 "raid_level": "raid1", 00:12:08.402 "superblock": false, 00:12:08.402 "num_base_bdevs": 4, 00:12:08.402 "num_base_bdevs_discovered": 4, 00:12:08.402 "num_base_bdevs_operational": 4, 00:12:08.402 "process": { 00:12:08.402 "type": "rebuild", 00:12:08.402 "target": "spare", 00:12:08.402 "progress": { 00:12:08.402 "blocks": 20480, 00:12:08.402 "percent": 31 00:12:08.402 } 00:12:08.402 }, 00:12:08.402 "base_bdevs_list": [ 00:12:08.402 { 00:12:08.402 "name": "spare", 00:12:08.402 "uuid": "7aa0ce08-c94e-5759-9fe2-2043dd94acb1", 00:12:08.402 "is_configured": true, 00:12:08.402 "data_offset": 0, 00:12:08.402 "data_size": 65536 00:12:08.402 }, 00:12:08.402 { 00:12:08.402 "name": "BaseBdev2", 00:12:08.402 "uuid": "c97f13a8-d2e6-5953-924d-5ee6a34d032b", 00:12:08.402 "is_configured": true, 00:12:08.402 "data_offset": 0, 00:12:08.402 "data_size": 65536 00:12:08.402 }, 00:12:08.402 { 00:12:08.402 "name": "BaseBdev3", 00:12:08.402 "uuid": "080e87ae-ea63-57a3-8315-7aee74d783d4", 00:12:08.402 "is_configured": true, 00:12:08.402 "data_offset": 0, 00:12:08.402 "data_size": 65536 00:12:08.402 }, 00:12:08.402 { 00:12:08.402 "name": "BaseBdev4", 00:12:08.402 "uuid": "fd0d9a13-a10b-5aa8-87a7-22b1c6235a22", 00:12:08.402 "is_configured": true, 00:12:08.402 "data_offset": 0, 00:12:08.402 "data_size": 65536 00:12:08.402 } 00:12:08.402 ] 00:12:08.402 }' 00:12:08.402 01:13:20 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.402 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.402 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.402 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.402 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:08.402 01:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.402 01:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.402 [2024-10-15 01:13:21.093067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.661 [2024-10-15 01:13:21.140978] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:08.661 [2024-10-15 01:13:21.141058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.661 [2024-10-15 01:13:21.141078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.661 [2024-10-15 01:13:21.141087] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.661 "name": "raid_bdev1", 00:12:08.661 "uuid": "6dbd7b0f-8309-4716-871e-40f487f3cabb", 00:12:08.661 "strip_size_kb": 0, 00:12:08.661 "state": "online", 00:12:08.661 "raid_level": "raid1", 00:12:08.661 "superblock": false, 00:12:08.661 "num_base_bdevs": 4, 00:12:08.661 "num_base_bdevs_discovered": 3, 00:12:08.661 "num_base_bdevs_operational": 3, 00:12:08.661 "base_bdevs_list": [ 00:12:08.661 { 00:12:08.661 "name": null, 00:12:08.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.661 "is_configured": false, 00:12:08.661 "data_offset": 0, 00:12:08.661 "data_size": 65536 00:12:08.661 }, 00:12:08.661 { 00:12:08.661 "name": "BaseBdev2", 00:12:08.661 "uuid": "c97f13a8-d2e6-5953-924d-5ee6a34d032b", 00:12:08.661 "is_configured": true, 00:12:08.661 "data_offset": 0, 00:12:08.661 "data_size": 65536 00:12:08.661 }, 00:12:08.661 { 
00:12:08.661 "name": "BaseBdev3", 00:12:08.661 "uuid": "080e87ae-ea63-57a3-8315-7aee74d783d4", 00:12:08.661 "is_configured": true, 00:12:08.661 "data_offset": 0, 00:12:08.661 "data_size": 65536 00:12:08.661 }, 00:12:08.661 { 00:12:08.661 "name": "BaseBdev4", 00:12:08.661 "uuid": "fd0d9a13-a10b-5aa8-87a7-22b1c6235a22", 00:12:08.661 "is_configured": true, 00:12:08.661 "data_offset": 0, 00:12:08.661 "data_size": 65536 00:12:08.661 } 00:12:08.661 ] 00:12:08.661 }' 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.661 01:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.920 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:08.920 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.920 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:08.920 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:08.920 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.920 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.920 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.920 01:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.920 01:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.920 01:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.920 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.920 "name": "raid_bdev1", 00:12:08.920 "uuid": "6dbd7b0f-8309-4716-871e-40f487f3cabb", 00:12:08.920 "strip_size_kb": 0, 00:12:08.920 "state": "online", 
00:12:08.920 "raid_level": "raid1", 00:12:08.920 "superblock": false, 00:12:08.920 "num_base_bdevs": 4, 00:12:08.920 "num_base_bdevs_discovered": 3, 00:12:08.920 "num_base_bdevs_operational": 3, 00:12:08.920 "base_bdevs_list": [ 00:12:08.920 { 00:12:08.920 "name": null, 00:12:08.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.920 "is_configured": false, 00:12:08.920 "data_offset": 0, 00:12:08.920 "data_size": 65536 00:12:08.920 }, 00:12:08.920 { 00:12:08.920 "name": "BaseBdev2", 00:12:08.920 "uuid": "c97f13a8-d2e6-5953-924d-5ee6a34d032b", 00:12:08.920 "is_configured": true, 00:12:08.920 "data_offset": 0, 00:12:08.921 "data_size": 65536 00:12:08.921 }, 00:12:08.921 { 00:12:08.921 "name": "BaseBdev3", 00:12:08.921 "uuid": "080e87ae-ea63-57a3-8315-7aee74d783d4", 00:12:08.921 "is_configured": true, 00:12:08.921 "data_offset": 0, 00:12:08.921 "data_size": 65536 00:12:08.921 }, 00:12:08.921 { 00:12:08.921 "name": "BaseBdev4", 00:12:08.921 "uuid": "fd0d9a13-a10b-5aa8-87a7-22b1c6235a22", 00:12:08.921 "is_configured": true, 00:12:08.921 "data_offset": 0, 00:12:08.921 "data_size": 65536 00:12:08.921 } 00:12:08.921 ] 00:12:08.921 }' 00:12:08.921 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.179 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:09.179 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.179 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:09.179 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:09.179 01:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.179 01:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.179 [2024-10-15 01:13:21.720743] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:09.179 [2024-10-15 01:13:21.725129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06490 00:12:09.179 01:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.179 01:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:09.179 [2024-10-15 01:13:21.727013] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:10.127 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.127 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.127 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.127 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.127 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.127 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.127 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.127 01:13:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.127 01:13:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.127 01:13:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.127 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.127 "name": "raid_bdev1", 00:12:10.127 "uuid": "6dbd7b0f-8309-4716-871e-40f487f3cabb", 00:12:10.127 "strip_size_kb": 0, 00:12:10.127 "state": "online", 00:12:10.127 "raid_level": "raid1", 00:12:10.127 "superblock": false, 00:12:10.127 "num_base_bdevs": 4, 00:12:10.127 
"num_base_bdevs_discovered": 4, 00:12:10.127 "num_base_bdevs_operational": 4, 00:12:10.127 "process": { 00:12:10.127 "type": "rebuild", 00:12:10.127 "target": "spare", 00:12:10.127 "progress": { 00:12:10.127 "blocks": 20480, 00:12:10.127 "percent": 31 00:12:10.127 } 00:12:10.127 }, 00:12:10.127 "base_bdevs_list": [ 00:12:10.127 { 00:12:10.127 "name": "spare", 00:12:10.127 "uuid": "7aa0ce08-c94e-5759-9fe2-2043dd94acb1", 00:12:10.127 "is_configured": true, 00:12:10.127 "data_offset": 0, 00:12:10.127 "data_size": 65536 00:12:10.127 }, 00:12:10.127 { 00:12:10.127 "name": "BaseBdev2", 00:12:10.127 "uuid": "c97f13a8-d2e6-5953-924d-5ee6a34d032b", 00:12:10.127 "is_configured": true, 00:12:10.127 "data_offset": 0, 00:12:10.127 "data_size": 65536 00:12:10.127 }, 00:12:10.127 { 00:12:10.127 "name": "BaseBdev3", 00:12:10.127 "uuid": "080e87ae-ea63-57a3-8315-7aee74d783d4", 00:12:10.127 "is_configured": true, 00:12:10.127 "data_offset": 0, 00:12:10.127 "data_size": 65536 00:12:10.127 }, 00:12:10.127 { 00:12:10.127 "name": "BaseBdev4", 00:12:10.127 "uuid": "fd0d9a13-a10b-5aa8-87a7-22b1c6235a22", 00:12:10.127 "is_configured": true, 00:12:10.127 "data_offset": 0, 00:12:10.127 "data_size": 65536 00:12:10.127 } 00:12:10.127 ] 00:12:10.127 }' 00:12:10.127 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.128 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.128 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.386 [2024-10-15 01:13:22.888011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:10.386 [2024-10-15 01:13:22.931908] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06490 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.386 01:13:22 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.386 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.386 "name": "raid_bdev1", 00:12:10.386 "uuid": "6dbd7b0f-8309-4716-871e-40f487f3cabb", 00:12:10.386 "strip_size_kb": 0, 00:12:10.386 "state": "online", 00:12:10.386 "raid_level": "raid1", 00:12:10.386 "superblock": false, 00:12:10.386 "num_base_bdevs": 4, 00:12:10.387 "num_base_bdevs_discovered": 3, 00:12:10.387 "num_base_bdevs_operational": 3, 00:12:10.387 "process": { 00:12:10.387 "type": "rebuild", 00:12:10.387 "target": "spare", 00:12:10.387 "progress": { 00:12:10.387 "blocks": 24576, 00:12:10.387 "percent": 37 00:12:10.387 } 00:12:10.387 }, 00:12:10.387 "base_bdevs_list": [ 00:12:10.387 { 00:12:10.387 "name": "spare", 00:12:10.387 "uuid": "7aa0ce08-c94e-5759-9fe2-2043dd94acb1", 00:12:10.387 "is_configured": true, 00:12:10.387 "data_offset": 0, 00:12:10.387 "data_size": 65536 00:12:10.387 }, 00:12:10.387 { 00:12:10.387 "name": null, 00:12:10.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.387 "is_configured": false, 00:12:10.387 "data_offset": 0, 00:12:10.387 "data_size": 65536 00:12:10.387 }, 00:12:10.387 { 00:12:10.387 "name": "BaseBdev3", 00:12:10.387 "uuid": "080e87ae-ea63-57a3-8315-7aee74d783d4", 00:12:10.387 "is_configured": true, 00:12:10.387 "data_offset": 0, 00:12:10.387 "data_size": 65536 00:12:10.387 }, 00:12:10.387 { 00:12:10.387 "name": "BaseBdev4", 00:12:10.387 "uuid": "fd0d9a13-a10b-5aa8-87a7-22b1c6235a22", 00:12:10.387 "is_configured": true, 00:12:10.387 "data_offset": 0, 00:12:10.387 "data_size": 65536 00:12:10.387 } 00:12:10.387 ] 00:12:10.387 }' 00:12:10.387 01:13:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.387 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.387 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:12:10.387 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.387 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=355 00:12:10.387 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:10.387 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.387 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.387 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.387 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.387 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.387 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.387 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.387 01:13:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.387 01:13:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.645 01:13:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.645 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.645 "name": "raid_bdev1", 00:12:10.645 "uuid": "6dbd7b0f-8309-4716-871e-40f487f3cabb", 00:12:10.645 "strip_size_kb": 0, 00:12:10.645 "state": "online", 00:12:10.645 "raid_level": "raid1", 00:12:10.645 "superblock": false, 00:12:10.645 "num_base_bdevs": 4, 00:12:10.645 "num_base_bdevs_discovered": 3, 00:12:10.645 "num_base_bdevs_operational": 3, 00:12:10.645 "process": { 00:12:10.645 "type": "rebuild", 00:12:10.645 "target": "spare", 00:12:10.645 "progress": { 
00:12:10.645 "blocks": 26624, 00:12:10.645 "percent": 40 00:12:10.645 } 00:12:10.645 }, 00:12:10.645 "base_bdevs_list": [ 00:12:10.645 { 00:12:10.645 "name": "spare", 00:12:10.645 "uuid": "7aa0ce08-c94e-5759-9fe2-2043dd94acb1", 00:12:10.645 "is_configured": true, 00:12:10.645 "data_offset": 0, 00:12:10.645 "data_size": 65536 00:12:10.645 }, 00:12:10.645 { 00:12:10.645 "name": null, 00:12:10.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.645 "is_configured": false, 00:12:10.645 "data_offset": 0, 00:12:10.645 "data_size": 65536 00:12:10.645 }, 00:12:10.645 { 00:12:10.645 "name": "BaseBdev3", 00:12:10.645 "uuid": "080e87ae-ea63-57a3-8315-7aee74d783d4", 00:12:10.645 "is_configured": true, 00:12:10.645 "data_offset": 0, 00:12:10.645 "data_size": 65536 00:12:10.645 }, 00:12:10.645 { 00:12:10.645 "name": "BaseBdev4", 00:12:10.645 "uuid": "fd0d9a13-a10b-5aa8-87a7-22b1c6235a22", 00:12:10.645 "is_configured": true, 00:12:10.645 "data_offset": 0, 00:12:10.645 "data_size": 65536 00:12:10.645 } 00:12:10.645 ] 00:12:10.645 }' 00:12:10.645 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.645 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.645 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.645 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.645 01:13:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:11.581 01:13:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:11.581 01:13:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.581 01:13:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.581 01:13:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:12:11.581 01:13:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.581 01:13:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.581 01:13:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.581 01:13:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.581 01:13:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.581 01:13:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.581 01:13:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.581 01:13:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.581 "name": "raid_bdev1", 00:12:11.581 "uuid": "6dbd7b0f-8309-4716-871e-40f487f3cabb", 00:12:11.581 "strip_size_kb": 0, 00:12:11.581 "state": "online", 00:12:11.581 "raid_level": "raid1", 00:12:11.581 "superblock": false, 00:12:11.581 "num_base_bdevs": 4, 00:12:11.581 "num_base_bdevs_discovered": 3, 00:12:11.581 "num_base_bdevs_operational": 3, 00:12:11.581 "process": { 00:12:11.581 "type": "rebuild", 00:12:11.581 "target": "spare", 00:12:11.581 "progress": { 00:12:11.581 "blocks": 51200, 00:12:11.581 "percent": 78 00:12:11.581 } 00:12:11.581 }, 00:12:11.581 "base_bdevs_list": [ 00:12:11.581 { 00:12:11.581 "name": "spare", 00:12:11.581 "uuid": "7aa0ce08-c94e-5759-9fe2-2043dd94acb1", 00:12:11.581 "is_configured": true, 00:12:11.581 "data_offset": 0, 00:12:11.581 "data_size": 65536 00:12:11.581 }, 00:12:11.581 { 00:12:11.581 "name": null, 00:12:11.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.581 "is_configured": false, 00:12:11.581 "data_offset": 0, 00:12:11.581 "data_size": 65536 00:12:11.581 }, 00:12:11.581 { 00:12:11.581 "name": "BaseBdev3", 00:12:11.581 "uuid": 
"080e87ae-ea63-57a3-8315-7aee74d783d4", 00:12:11.581 "is_configured": true, 00:12:11.581 "data_offset": 0, 00:12:11.581 "data_size": 65536 00:12:11.581 }, 00:12:11.581 { 00:12:11.581 "name": "BaseBdev4", 00:12:11.581 "uuid": "fd0d9a13-a10b-5aa8-87a7-22b1c6235a22", 00:12:11.581 "is_configured": true, 00:12:11.581 "data_offset": 0, 00:12:11.581 "data_size": 65536 00:12:11.581 } 00:12:11.581 ] 00:12:11.581 }' 00:12:11.581 01:13:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.840 01:13:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.840 01:13:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.840 01:13:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.840 01:13:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:12.407 [2024-10-15 01:13:24.940206] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:12.407 [2024-10-15 01:13:24.940409] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:12.407 [2024-10-15 01:13:24.940462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.974 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:12.974 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.974 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.974 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.974 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.974 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.974 01:13:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.974 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.974 01:13:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.974 01:13:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.974 01:13:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.974 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.974 "name": "raid_bdev1", 00:12:12.974 "uuid": "6dbd7b0f-8309-4716-871e-40f487f3cabb", 00:12:12.974 "strip_size_kb": 0, 00:12:12.974 "state": "online", 00:12:12.974 "raid_level": "raid1", 00:12:12.974 "superblock": false, 00:12:12.974 "num_base_bdevs": 4, 00:12:12.974 "num_base_bdevs_discovered": 3, 00:12:12.974 "num_base_bdevs_operational": 3, 00:12:12.974 "base_bdevs_list": [ 00:12:12.974 { 00:12:12.974 "name": "spare", 00:12:12.974 "uuid": "7aa0ce08-c94e-5759-9fe2-2043dd94acb1", 00:12:12.974 "is_configured": true, 00:12:12.974 "data_offset": 0, 00:12:12.974 "data_size": 65536 00:12:12.975 }, 00:12:12.975 { 00:12:12.975 "name": null, 00:12:12.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.975 "is_configured": false, 00:12:12.975 "data_offset": 0, 00:12:12.975 "data_size": 65536 00:12:12.975 }, 00:12:12.975 { 00:12:12.975 "name": "BaseBdev3", 00:12:12.975 "uuid": "080e87ae-ea63-57a3-8315-7aee74d783d4", 00:12:12.975 "is_configured": true, 00:12:12.975 "data_offset": 0, 00:12:12.975 "data_size": 65536 00:12:12.975 }, 00:12:12.975 { 00:12:12.975 "name": "BaseBdev4", 00:12:12.975 "uuid": "fd0d9a13-a10b-5aa8-87a7-22b1c6235a22", 00:12:12.975 "is_configured": true, 00:12:12.975 "data_offset": 0, 00:12:12.975 "data_size": 65536 00:12:12.975 } 00:12:12.975 ] 00:12:12.975 }' 00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"'
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:12:12.975 "name": "raid_bdev1",
00:12:12.975 "uuid": "6dbd7b0f-8309-4716-871e-40f487f3cabb",
00:12:12.975 "strip_size_kb": 0,
00:12:12.975 "state": "online",
00:12:12.975 "raid_level": "raid1",
00:12:12.975 "superblock": false,
00:12:12.975 "num_base_bdevs": 4,
00:12:12.975 "num_base_bdevs_discovered": 3,
00:12:12.975 "num_base_bdevs_operational": 3,
00:12:12.975 "base_bdevs_list": [
00:12:12.975 {
00:12:12.975 "name": "spare",
00:12:12.975 "uuid": "7aa0ce08-c94e-5759-9fe2-2043dd94acb1",
00:12:12.975 "is_configured": true,
00:12:12.975 "data_offset": 0,
00:12:12.975 "data_size": 65536
00:12:12.975 },
00:12:12.975 {
00:12:12.975 "name": null,
00:12:12.975 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:12.975 "is_configured": false,
00:12:12.975 "data_offset": 0,
00:12:12.975 "data_size": 65536
00:12:12.975 },
00:12:12.975 {
00:12:12.975 "name": "BaseBdev3",
00:12:12.975 "uuid": "080e87ae-ea63-57a3-8315-7aee74d783d4",
00:12:12.975 "is_configured": true,
00:12:12.975 "data_offset": 0,
00:12:12.975 "data_size": 65536
00:12:12.975 },
00:12:12.975 {
00:12:12.975 "name": "BaseBdev4",
00:12:12.975 "uuid": "fd0d9a13-a10b-5aa8-87a7-22b1c6235a22",
00:12:12.975 "is_configured": true,
00:12:12.975 "data_offset": 0,
00:12:12.975 "data_size": 65536
00:12:12.975 }
00:12:12.975 ]
00:12:12.975 }'
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:12.975 "name": "raid_bdev1",
00:12:12.975 "uuid": "6dbd7b0f-8309-4716-871e-40f487f3cabb",
00:12:12.975 "strip_size_kb": 0,
00:12:12.975 "state": "online",
00:12:12.975 "raid_level": "raid1",
00:12:12.975 "superblock": false,
00:12:12.975 "num_base_bdevs": 4,
00:12:12.975 "num_base_bdevs_discovered": 3,
00:12:12.975 "num_base_bdevs_operational": 3,
00:12:12.975 "base_bdevs_list": [
00:12:12.975 {
00:12:12.975 "name": "spare",
00:12:12.975 "uuid": "7aa0ce08-c94e-5759-9fe2-2043dd94acb1",
00:12:12.975 "is_configured": true,
00:12:12.975 "data_offset": 0,
00:12:12.975 "data_size": 65536
00:12:12.975 },
00:12:12.975 {
00:12:12.975 "name": null,
00:12:12.975 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:12.975 "is_configured": false,
00:12:12.975 "data_offset": 0,
00:12:12.975 "data_size": 65536
00:12:12.975 },
00:12:12.975 {
00:12:12.975 "name": "BaseBdev3",
00:12:12.975 "uuid": "080e87ae-ea63-57a3-8315-7aee74d783d4",
00:12:12.975 "is_configured": true,
00:12:12.975 "data_offset": 0,
00:12:12.975 "data_size": 65536
00:12:12.975 },
00:12:12.975 {
00:12:12.975 "name": "BaseBdev4",
00:12:12.975 "uuid": "fd0d9a13-a10b-5aa8-87a7-22b1c6235a22",
00:12:12.975 "is_configured": true,
00:12:12.975 "data_offset": 0,
00:12:12.975 "data_size": 65536
00:12:12.975 }
00:12:12.975 ]
00:12:12.975 }'
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:12.975 01:13:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:13.542 [2024-10-15 01:13:26.066803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:13.542 [2024-10-15 01:13:26.066830] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:13.542 [2024-10-15 01:13:26.066921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:13.542 [2024-10-15 01:13:26.066996] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:13.542 [2024-10-15 01:13:26.067010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:13.542 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:12:13.801 /dev/nbd0
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:13.801 1+0 records in
00:12:13.801 1+0 records out
00:12:13.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517921 s, 7.9 MB/s
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:13.801 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:12:14.060 /dev/nbd1
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:14.060 1+0 records in
00:12:14.060 1+0 records out
00:12:14.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376375 s, 10.9 MB/s
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:14.060 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:12:14.061 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:14.061 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:12:14.320 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:14.320 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:14.320 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:14.320 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:14.320 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:14.320 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:14.320 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:12:14.320 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:12:14.320 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:14.320 01:13:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 87922
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 87922 ']'
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 87922
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87922
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87922'
00:12:14.580 killing process with pid 87922
01:13:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 87922
00:12:14.580 Received shutdown signal, test time was about 60.000000 seconds
00:12:14.580
00:12:14.580 Latency(us)
00:12:14.580 [2024-10-15T01:13:27.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:14.580 [2024-10-15T01:13:27.304Z] ===================================================================================================================
00:12:14.580 [2024-10-15T01:13:27.304Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:12:14.580 [2024-10-15 01:13:27.139404] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:14.580 01:13:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 87922
00:12:14.580 [2024-10-15 01:13:27.190750] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:14.841 ************************************
00:12:14.841 END TEST raid_rebuild_test
00:12:14.841 ************************************
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:12:14.841
00:12:14.841 real 0m15.199s
00:12:14.841 user 0m17.487s
00:12:14.841 sys 0m2.948s
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.841 01:13:27 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true
00:12:14.841 01:13:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:12:14.841 01:13:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:14.841 01:13:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:14.841 ************************************
00:12:14.841 START TEST raid_rebuild_test_sb
00:12:14.841 ************************************
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88356
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88356
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88356 ']'
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb --
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:14.841 01:13:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:14.841 [2024-10-15 01:13:27.561096] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization...
00:12:14.841 [2024-10-15 01:13:27.561329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88356 ]
00:12:14.841 I/O size of 3145728 is greater than zero copy threshold (65536).
00:12:14.841 Zero copy mechanism will not be used.
00:12:15.101 [2024-10-15 01:13:27.704697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:15.101 [2024-10-15 01:13:27.731483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:15.101 [2024-10-15 01:13:27.774330] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:15.101 [2024-10-15 01:13:27.774447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.043 BaseBdev1_malloc
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.043 [2024-10-15 01:13:28.425350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:12:16.043 [2024-10-15 01:13:28.425414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:16.043 [2024-10-15 01:13:28.425452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:12:16.043 [2024-10-15 01:13:28.425464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:16.043 [2024-10-15 01:13:28.427533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:16.043 [2024-10-15 01:13:28.427581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:12:16.043 BaseBdev1
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.043 BaseBdev2_malloc
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.043 [2024-10-15 01:13:28.454195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:12:16.043 [2024-10-15 01:13:28.454246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:16.043 [2024-10-15 01:13:28.454266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:12:16.043 [2024-10-15 01:13:28.454275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:16.043 [2024-10-15 01:13:28.456544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:16.043 [2024-10-15 01:13:28.456630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:12:16.043 BaseBdev2
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.043 BaseBdev3_malloc
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.043 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.044 [2024-10-15 01:13:28.483140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:12:16.044 [2024-10-15 01:13:28.483246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:16.044 [2024-10-15 01:13:28.483274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:12:16.044 [2024-10-15 01:13:28.483284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:16.044 [2024-10-15 01:13:28.485438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:16.044 [2024-10-15 01:13:28.485460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:12:16.044 BaseBdev3
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.044 BaseBdev4_malloc
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.044 [2024-10-15 01:13:28.527621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:12:16.044 [2024-10-15 01:13:28.527700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:16.044 [2024-10-15 01:13:28.527735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:12:16.044 [2024-10-15 01:13:28.527747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:16.044 [2024-10-15 01:13:28.530414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:16.044 [2024-10-15 01:13:28.530494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:12:16.044 BaseBdev4
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.044 spare_malloc
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.044 spare_delay
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.044 [2024-10-15 01:13:28.565021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:12:16.044 [2024-10-15 01:13:28.565074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:16.044 [2024-10-15 01:13:28.565096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:12:16.044 [2024-10-15 01:13:28.565105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:16.044 [2024-10-15 01:13:28.567348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:16.044 [2024-10-15 01:13:28.567384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:12:16.044 spare
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.044 [2024-10-15 01:13:28.573100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:16.044 [2024-10-15 01:13:28.575065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:16.044 [2024-10-15 01:13:28.575129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:16.044 [2024-10-15 01:13:28.575196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:12:16.044 [2024-10-15 01:13:28.575420] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
00:12:16.044 [2024-10-15 01:13:28.575434] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:16.044 [2024-10-15 01:13:28.575716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:12:16.044 [2024-10-15 01:13:28.575865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:12:16.044 [2024-10-15 01:13:28.575889] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200
00:12:16.044 [2024-10-15 01:13:28.576007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:16.044 "name": "raid_bdev1",
00:12:16.044 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a",
00:12:16.044 "strip_size_kb": 0,
00:12:16.044 "state": "online",
00:12:16.044 "raid_level": "raid1",
00:12:16.044 "superblock": true,
00:12:16.044 "num_base_bdevs": 4,
00:12:16.044 "num_base_bdevs_discovered": 4,
00:12:16.044 "num_base_bdevs_operational": 4,
00:12:16.044 "base_bdevs_list": [
00:12:16.044 {
00:12:16.044 "name": "BaseBdev1",
00:12:16.044 "uuid": "812204f7-dbb7-563b-a7fa-1eedeb43d5f2",
00:12:16.044 "is_configured": true,
00:12:16.044 "data_offset": 2048,
00:12:16.044 "data_size": 63488
00:12:16.044 },
00:12:16.044 {
00:12:16.044 "name": "BaseBdev2",
00:12:16.044 "uuid": "e8a3d2f7-7771-55e5-b143-cb24cb4ebf5b",
00:12:16.044 "is_configured": true,
00:12:16.044 "data_offset": 2048,
00:12:16.044 "data_size": 63488
00:12:16.044 },
00:12:16.044 {
00:12:16.044 "name": "BaseBdev3",
00:12:16.044 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86",
00:12:16.044 "is_configured": true,
00:12:16.044 "data_offset": 2048,
00:12:16.044 "data_size": 63488
00:12:16.044 },
00:12:16.044 {
00:12:16.044 "name": "BaseBdev4",
00:12:16.044 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f",
00:12:16.044 "is_configured": true,
00:12:16.044 "data_offset": 2048,
00:12:16.044 "data_size": 63488
00:12:16.044 }
00:12:16.044 ]
00:12:16.044 }'
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:16.044 01:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.304 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:12:16.304 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:16.304 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.304 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.304 [2024-10-15 01:13:29.024623] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:16.564 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:16.823 [2024-10-15 01:13:29.307917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:12:16.823 /dev/nbd0 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:16.823 
01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.823 1+0 records in 00:12:16.823 1+0 records out 00:12:16.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321272 s, 12.7 MB/s 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:16.823 01:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:22.108 63488+0 records in 00:12:22.108 63488+0 records out 00:12:22.108 32505856 bytes (33 MB, 31 MiB) copied, 4.90019 s, 6.6 MB/s 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:22.108 [2024-10-15 01:13:34.497673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.108 [2024-10-15 01:13:34.514544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:22.108 
01:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.108 "name": "raid_bdev1", 00:12:22.108 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:22.108 "strip_size_kb": 0, 00:12:22.108 "state": 
"online", 00:12:22.108 "raid_level": "raid1", 00:12:22.108 "superblock": true, 00:12:22.108 "num_base_bdevs": 4, 00:12:22.108 "num_base_bdevs_discovered": 3, 00:12:22.108 "num_base_bdevs_operational": 3, 00:12:22.108 "base_bdevs_list": [ 00:12:22.108 { 00:12:22.108 "name": null, 00:12:22.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.108 "is_configured": false, 00:12:22.108 "data_offset": 0, 00:12:22.108 "data_size": 63488 00:12:22.108 }, 00:12:22.108 { 00:12:22.108 "name": "BaseBdev2", 00:12:22.108 "uuid": "e8a3d2f7-7771-55e5-b143-cb24cb4ebf5b", 00:12:22.108 "is_configured": true, 00:12:22.108 "data_offset": 2048, 00:12:22.108 "data_size": 63488 00:12:22.108 }, 00:12:22.108 { 00:12:22.108 "name": "BaseBdev3", 00:12:22.108 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:22.108 "is_configured": true, 00:12:22.108 "data_offset": 2048, 00:12:22.108 "data_size": 63488 00:12:22.108 }, 00:12:22.108 { 00:12:22.108 "name": "BaseBdev4", 00:12:22.108 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:22.108 "is_configured": true, 00:12:22.108 "data_offset": 2048, 00:12:22.108 "data_size": 63488 00:12:22.108 } 00:12:22.108 ] 00:12:22.108 }' 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.108 01:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.367 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:22.367 01:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.367 01:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.367 [2024-10-15 01:13:34.981778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:22.367 [2024-10-15 01:13:34.986136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:12:22.367 01:13:34 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.367 [2024-10-15 01:13:34.988181] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:22.367 01:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:23.305 01:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.305 01:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.305 01:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.305 01:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.305 01:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.305 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.305 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.305 01:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.305 01:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.305 01:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.565 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.565 "name": "raid_bdev1", 00:12:23.565 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:23.565 "strip_size_kb": 0, 00:12:23.565 "state": "online", 00:12:23.565 "raid_level": "raid1", 00:12:23.565 "superblock": true, 00:12:23.565 "num_base_bdevs": 4, 00:12:23.565 "num_base_bdevs_discovered": 4, 00:12:23.565 "num_base_bdevs_operational": 4, 00:12:23.565 "process": { 00:12:23.565 "type": "rebuild", 00:12:23.565 "target": "spare", 00:12:23.565 "progress": { 00:12:23.565 "blocks": 20480, 
00:12:23.565 "percent": 32 00:12:23.565 } 00:12:23.565 }, 00:12:23.565 "base_bdevs_list": [ 00:12:23.565 { 00:12:23.565 "name": "spare", 00:12:23.565 "uuid": "6098859b-45ea-59b7-bd8b-886bb999a04d", 00:12:23.565 "is_configured": true, 00:12:23.565 "data_offset": 2048, 00:12:23.565 "data_size": 63488 00:12:23.565 }, 00:12:23.565 { 00:12:23.565 "name": "BaseBdev2", 00:12:23.565 "uuid": "e8a3d2f7-7771-55e5-b143-cb24cb4ebf5b", 00:12:23.565 "is_configured": true, 00:12:23.565 "data_offset": 2048, 00:12:23.565 "data_size": 63488 00:12:23.565 }, 00:12:23.565 { 00:12:23.565 "name": "BaseBdev3", 00:12:23.565 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:23.565 "is_configured": true, 00:12:23.565 "data_offset": 2048, 00:12:23.565 "data_size": 63488 00:12:23.565 }, 00:12:23.565 { 00:12:23.565 "name": "BaseBdev4", 00:12:23.565 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:23.565 "is_configured": true, 00:12:23.565 "data_offset": 2048, 00:12:23.565 "data_size": 63488 00:12:23.565 } 00:12:23.565 ] 00:12:23.565 }' 00:12:23.565 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.565 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.565 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.565 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.565 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:23.565 01:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.565 01:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.565 [2024-10-15 01:13:36.129334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.565 [2024-10-15 01:13:36.193226] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:23.565 [2024-10-15 01:13:36.193331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.565 [2024-10-15 01:13:36.193368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.565 [2024-10-15 01:13:36.193376] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:23.565 01:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.565 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:23.565 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.565 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.565 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.565 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.566 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.566 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.566 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.566 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.566 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.566 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.566 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.566 01:13:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.566 01:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.566 01:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.566 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.566 "name": "raid_bdev1", 00:12:23.566 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:23.566 "strip_size_kb": 0, 00:12:23.566 "state": "online", 00:12:23.566 "raid_level": "raid1", 00:12:23.566 "superblock": true, 00:12:23.566 "num_base_bdevs": 4, 00:12:23.566 "num_base_bdevs_discovered": 3, 00:12:23.566 "num_base_bdevs_operational": 3, 00:12:23.566 "base_bdevs_list": [ 00:12:23.566 { 00:12:23.566 "name": null, 00:12:23.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.566 "is_configured": false, 00:12:23.566 "data_offset": 0, 00:12:23.566 "data_size": 63488 00:12:23.566 }, 00:12:23.566 { 00:12:23.566 "name": "BaseBdev2", 00:12:23.566 "uuid": "e8a3d2f7-7771-55e5-b143-cb24cb4ebf5b", 00:12:23.566 "is_configured": true, 00:12:23.566 "data_offset": 2048, 00:12:23.566 "data_size": 63488 00:12:23.566 }, 00:12:23.566 { 00:12:23.566 "name": "BaseBdev3", 00:12:23.566 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:23.566 "is_configured": true, 00:12:23.566 "data_offset": 2048, 00:12:23.566 "data_size": 63488 00:12:23.566 }, 00:12:23.566 { 00:12:23.566 "name": "BaseBdev4", 00:12:23.566 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:23.566 "is_configured": true, 00:12:23.566 "data_offset": 2048, 00:12:23.566 "data_size": 63488 00:12:23.566 } 00:12:23.566 ] 00:12:23.566 }' 00:12:23.566 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.566 01:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.136 "name": "raid_bdev1", 00:12:24.136 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:24.136 "strip_size_kb": 0, 00:12:24.136 "state": "online", 00:12:24.136 "raid_level": "raid1", 00:12:24.136 "superblock": true, 00:12:24.136 "num_base_bdevs": 4, 00:12:24.136 "num_base_bdevs_discovered": 3, 00:12:24.136 "num_base_bdevs_operational": 3, 00:12:24.136 "base_bdevs_list": [ 00:12:24.136 { 00:12:24.136 "name": null, 00:12:24.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.136 "is_configured": false, 00:12:24.136 "data_offset": 0, 00:12:24.136 "data_size": 63488 00:12:24.136 }, 00:12:24.136 { 00:12:24.136 "name": "BaseBdev2", 00:12:24.136 "uuid": "e8a3d2f7-7771-55e5-b143-cb24cb4ebf5b", 00:12:24.136 "is_configured": true, 00:12:24.136 "data_offset": 2048, 00:12:24.136 "data_size": 63488 00:12:24.136 }, 00:12:24.136 { 00:12:24.136 "name": "BaseBdev3", 00:12:24.136 "uuid": 
"fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:24.136 "is_configured": true, 00:12:24.136 "data_offset": 2048, 00:12:24.136 "data_size": 63488 00:12:24.136 }, 00:12:24.136 { 00:12:24.136 "name": "BaseBdev4", 00:12:24.136 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:24.136 "is_configured": true, 00:12:24.136 "data_offset": 2048, 00:12:24.136 "data_size": 63488 00:12:24.136 } 00:12:24.136 ] 00:12:24.136 }' 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.136 [2024-10-15 01:13:36.697209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:24.136 [2024-10-15 01:13:36.701528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e4f0 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.136 01:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:24.136 [2024-10-15 01:13:36.703379] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:25.074 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.074 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:25.074 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.074 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.074 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.074 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.074 01:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.074 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.074 01:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.074 01:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.074 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.074 "name": "raid_bdev1", 00:12:25.074 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:25.074 "strip_size_kb": 0, 00:12:25.074 "state": "online", 00:12:25.074 "raid_level": "raid1", 00:12:25.074 "superblock": true, 00:12:25.074 "num_base_bdevs": 4, 00:12:25.074 "num_base_bdevs_discovered": 4, 00:12:25.074 "num_base_bdevs_operational": 4, 00:12:25.074 "process": { 00:12:25.074 "type": "rebuild", 00:12:25.074 "target": "spare", 00:12:25.074 "progress": { 00:12:25.074 "blocks": 20480, 00:12:25.074 "percent": 32 00:12:25.074 } 00:12:25.074 }, 00:12:25.074 "base_bdevs_list": [ 00:12:25.074 { 00:12:25.074 "name": "spare", 00:12:25.074 "uuid": "6098859b-45ea-59b7-bd8b-886bb999a04d", 00:12:25.074 "is_configured": true, 00:12:25.074 "data_offset": 2048, 00:12:25.074 "data_size": 63488 00:12:25.074 }, 00:12:25.074 { 00:12:25.074 "name": "BaseBdev2", 00:12:25.074 "uuid": "e8a3d2f7-7771-55e5-b143-cb24cb4ebf5b", 00:12:25.074 "is_configured": true, 00:12:25.074 "data_offset": 2048, 
00:12:25.074 "data_size": 63488 00:12:25.074 }, 00:12:25.074 { 00:12:25.074 "name": "BaseBdev3", 00:12:25.074 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:25.074 "is_configured": true, 00:12:25.074 "data_offset": 2048, 00:12:25.074 "data_size": 63488 00:12:25.074 }, 00:12:25.074 { 00:12:25.074 "name": "BaseBdev4", 00:12:25.074 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:25.074 "is_configured": true, 00:12:25.074 "data_offset": 2048, 00:12:25.074 "data_size": 63488 00:12:25.074 } 00:12:25.074 ] 00:12:25.074 }' 00:12:25.074 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.334 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.334 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.334 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.334 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:25.334 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:25.334 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:25.334 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:25.334 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:25.334 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:25.334 01:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:25.334 01:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.334 01:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.334 [2024-10-15 01:13:37.864159] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:25.334 [2024-10-15 01:13:38.007955] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e4f0 00:12:25.334 01:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.334 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:25.334 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:25.334 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.334 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.334 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.334 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.334 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.334 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.334 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.334 01:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.334 01:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.334 01:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.593 "name": "raid_bdev1", 00:12:25.593 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:25.593 "strip_size_kb": 0, 00:12:25.593 "state": "online", 00:12:25.593 "raid_level": "raid1", 00:12:25.593 "superblock": true, 00:12:25.593 "num_base_bdevs": 4, 
00:12:25.593 "num_base_bdevs_discovered": 3, 00:12:25.593 "num_base_bdevs_operational": 3, 00:12:25.593 "process": { 00:12:25.593 "type": "rebuild", 00:12:25.593 "target": "spare", 00:12:25.593 "progress": { 00:12:25.593 "blocks": 24576, 00:12:25.593 "percent": 38 00:12:25.593 } 00:12:25.593 }, 00:12:25.593 "base_bdevs_list": [ 00:12:25.593 { 00:12:25.593 "name": "spare", 00:12:25.593 "uuid": "6098859b-45ea-59b7-bd8b-886bb999a04d", 00:12:25.593 "is_configured": true, 00:12:25.593 "data_offset": 2048, 00:12:25.593 "data_size": 63488 00:12:25.593 }, 00:12:25.593 { 00:12:25.593 "name": null, 00:12:25.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.593 "is_configured": false, 00:12:25.593 "data_offset": 0, 00:12:25.593 "data_size": 63488 00:12:25.593 }, 00:12:25.593 { 00:12:25.593 "name": "BaseBdev3", 00:12:25.593 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:25.593 "is_configured": true, 00:12:25.593 "data_offset": 2048, 00:12:25.593 "data_size": 63488 00:12:25.593 }, 00:12:25.593 { 00:12:25.593 "name": "BaseBdev4", 00:12:25.593 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:25.593 "is_configured": true, 00:12:25.593 "data_offset": 2048, 00:12:25.593 "data_size": 63488 00:12:25.593 } 00:12:25.593 ] 00:12:25.593 }' 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=370 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.593 "name": "raid_bdev1", 00:12:25.593 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:25.593 "strip_size_kb": 0, 00:12:25.593 "state": "online", 00:12:25.593 "raid_level": "raid1", 00:12:25.593 "superblock": true, 00:12:25.593 "num_base_bdevs": 4, 00:12:25.593 "num_base_bdevs_discovered": 3, 00:12:25.593 "num_base_bdevs_operational": 3, 00:12:25.593 "process": { 00:12:25.593 "type": "rebuild", 00:12:25.593 "target": "spare", 00:12:25.593 "progress": { 00:12:25.593 "blocks": 26624, 00:12:25.593 "percent": 41 00:12:25.593 } 00:12:25.593 }, 00:12:25.593 "base_bdevs_list": [ 00:12:25.593 { 00:12:25.593 "name": "spare", 00:12:25.593 "uuid": "6098859b-45ea-59b7-bd8b-886bb999a04d", 00:12:25.593 "is_configured": true, 00:12:25.593 "data_offset": 2048, 00:12:25.593 "data_size": 63488 00:12:25.593 }, 00:12:25.593 { 
00:12:25.593 "name": null, 00:12:25.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.593 "is_configured": false, 00:12:25.593 "data_offset": 0, 00:12:25.593 "data_size": 63488 00:12:25.593 }, 00:12:25.593 { 00:12:25.593 "name": "BaseBdev3", 00:12:25.593 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:25.593 "is_configured": true, 00:12:25.593 "data_offset": 2048, 00:12:25.593 "data_size": 63488 00:12:25.593 }, 00:12:25.593 { 00:12:25.593 "name": "BaseBdev4", 00:12:25.593 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:25.593 "is_configured": true, 00:12:25.593 "data_offset": 2048, 00:12:25.593 "data_size": 63488 00:12:25.593 } 00:12:25.593 ] 00:12:25.593 }' 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.593 01:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.973 "name": "raid_bdev1", 00:12:26.973 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:26.973 "strip_size_kb": 0, 00:12:26.973 "state": "online", 00:12:26.973 "raid_level": "raid1", 00:12:26.973 "superblock": true, 00:12:26.973 "num_base_bdevs": 4, 00:12:26.973 "num_base_bdevs_discovered": 3, 00:12:26.973 "num_base_bdevs_operational": 3, 00:12:26.973 "process": { 00:12:26.973 "type": "rebuild", 00:12:26.973 "target": "spare", 00:12:26.973 "progress": { 00:12:26.973 "blocks": 49152, 00:12:26.973 "percent": 77 00:12:26.973 } 00:12:26.973 }, 00:12:26.973 "base_bdevs_list": [ 00:12:26.973 { 00:12:26.973 "name": "spare", 00:12:26.973 "uuid": "6098859b-45ea-59b7-bd8b-886bb999a04d", 00:12:26.973 "is_configured": true, 00:12:26.973 "data_offset": 2048, 00:12:26.973 "data_size": 63488 00:12:26.973 }, 00:12:26.973 { 00:12:26.973 "name": null, 00:12:26.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.973 "is_configured": false, 00:12:26.973 "data_offset": 0, 00:12:26.973 "data_size": 63488 00:12:26.973 }, 00:12:26.973 { 00:12:26.973 "name": "BaseBdev3", 00:12:26.973 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:26.973 "is_configured": true, 00:12:26.973 "data_offset": 2048, 00:12:26.973 "data_size": 63488 00:12:26.973 }, 00:12:26.973 { 00:12:26.973 "name": "BaseBdev4", 00:12:26.973 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:26.973 "is_configured": true, 00:12:26.973 "data_offset": 
2048, 00:12:26.973 "data_size": 63488 00:12:26.973 } 00:12:26.973 ] 00:12:26.973 }' 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.973 01:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:27.241 [2024-10-15 01:13:39.915519] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:27.241 [2024-10-15 01:13:39.915739] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:27.241 [2024-10-15 01:13:39.915882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.827 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.827 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.827 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.827 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.827 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.827 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.827 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.827 01:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.827 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:12:27.827 01:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.827 01:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.827 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.827 "name": "raid_bdev1", 00:12:27.827 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:27.827 "strip_size_kb": 0, 00:12:27.827 "state": "online", 00:12:27.827 "raid_level": "raid1", 00:12:27.827 "superblock": true, 00:12:27.827 "num_base_bdevs": 4, 00:12:27.827 "num_base_bdevs_discovered": 3, 00:12:27.827 "num_base_bdevs_operational": 3, 00:12:27.827 "base_bdevs_list": [ 00:12:27.827 { 00:12:27.827 "name": "spare", 00:12:27.827 "uuid": "6098859b-45ea-59b7-bd8b-886bb999a04d", 00:12:27.827 "is_configured": true, 00:12:27.827 "data_offset": 2048, 00:12:27.827 "data_size": 63488 00:12:27.827 }, 00:12:27.827 { 00:12:27.827 "name": null, 00:12:27.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.827 "is_configured": false, 00:12:27.827 "data_offset": 0, 00:12:27.827 "data_size": 63488 00:12:27.827 }, 00:12:27.827 { 00:12:27.827 "name": "BaseBdev3", 00:12:27.827 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:27.827 "is_configured": true, 00:12:27.827 "data_offset": 2048, 00:12:27.827 "data_size": 63488 00:12:27.827 }, 00:12:27.827 { 00:12:27.827 "name": "BaseBdev4", 00:12:27.827 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:27.827 "is_configured": true, 00:12:27.827 "data_offset": 2048, 00:12:27.827 "data_size": 63488 00:12:27.827 } 00:12:27.827 ] 00:12:27.827 }' 00:12:27.827 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.087 "name": "raid_bdev1", 00:12:28.087 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:28.087 "strip_size_kb": 0, 00:12:28.087 "state": "online", 00:12:28.087 "raid_level": "raid1", 00:12:28.087 "superblock": true, 00:12:28.087 "num_base_bdevs": 4, 00:12:28.087 "num_base_bdevs_discovered": 3, 00:12:28.087 "num_base_bdevs_operational": 3, 00:12:28.087 "base_bdevs_list": [ 00:12:28.087 { 00:12:28.087 "name": "spare", 00:12:28.087 "uuid": "6098859b-45ea-59b7-bd8b-886bb999a04d", 00:12:28.087 "is_configured": true, 00:12:28.087 "data_offset": 2048, 00:12:28.087 "data_size": 63488 
00:12:28.087 }, 00:12:28.087 { 00:12:28.087 "name": null, 00:12:28.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.087 "is_configured": false, 00:12:28.087 "data_offset": 0, 00:12:28.087 "data_size": 63488 00:12:28.087 }, 00:12:28.087 { 00:12:28.087 "name": "BaseBdev3", 00:12:28.087 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:28.087 "is_configured": true, 00:12:28.087 "data_offset": 2048, 00:12:28.087 "data_size": 63488 00:12:28.087 }, 00:12:28.087 { 00:12:28.087 "name": "BaseBdev4", 00:12:28.087 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:28.087 "is_configured": true, 00:12:28.087 "data_offset": 2048, 00:12:28.087 "data_size": 63488 00:12:28.087 } 00:12:28.087 ] 00:12:28.087 }' 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.087 01:13:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.087 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.087 "name": "raid_bdev1", 00:12:28.087 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:28.087 "strip_size_kb": 0, 00:12:28.087 "state": "online", 00:12:28.087 "raid_level": "raid1", 00:12:28.087 "superblock": true, 00:12:28.087 "num_base_bdevs": 4, 00:12:28.087 "num_base_bdevs_discovered": 3, 00:12:28.087 "num_base_bdevs_operational": 3, 00:12:28.087 "base_bdevs_list": [ 00:12:28.087 { 00:12:28.087 "name": "spare", 00:12:28.087 "uuid": "6098859b-45ea-59b7-bd8b-886bb999a04d", 00:12:28.087 "is_configured": true, 00:12:28.087 "data_offset": 2048, 00:12:28.087 "data_size": 63488 00:12:28.087 }, 00:12:28.087 { 00:12:28.087 "name": null, 00:12:28.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.087 "is_configured": false, 00:12:28.087 "data_offset": 0, 00:12:28.088 "data_size": 63488 00:12:28.088 }, 00:12:28.088 { 00:12:28.088 "name": "BaseBdev3", 00:12:28.088 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:28.088 "is_configured": true, 00:12:28.088 "data_offset": 2048, 00:12:28.088 "data_size": 63488 00:12:28.088 }, 
00:12:28.088 { 00:12:28.088 "name": "BaseBdev4", 00:12:28.088 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:28.088 "is_configured": true, 00:12:28.088 "data_offset": 2048, 00:12:28.088 "data_size": 63488 00:12:28.088 } 00:12:28.088 ] 00:12:28.088 }' 00:12:28.088 01:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.088 01:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.676 [2024-10-15 01:13:41.118399] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.676 [2024-10-15 01:13:41.118433] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.676 [2024-10-15 01:13:41.118533] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.676 [2024-10-15 01:13:41.118617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.676 [2024-10-15 01:13:41.118631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.676 01:13:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:28.676 /dev/nbd0 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 
00:12:28.676 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.937 1+0 records in 00:12:28.937 1+0 records out 00:12:28.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607773 s, 6.7 MB/s 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:28.937 /dev/nbd1 00:12:28.937 01:13:41 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.937 1+0 records in 00:12:28.937 1+0 records out 00:12:28.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259691 s, 15.8 MB/s 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:28.937 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.196 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:29.196 01:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:29.196 01:13:41 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.196 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:29.196 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:29.196 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:29.196 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.196 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:29.196 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.196 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:29.196 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.196 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:29.456 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:29.456 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:29.456 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:29.456 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.456 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.456 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:29.456 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:29.456 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.456 01:13:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.456 01:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.456 [2024-10-15 01:13:42.168260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:12:29.456 [2024-10-15 01:13:42.168370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.456 [2024-10-15 01:13:42.168407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:29.456 [2024-10-15 01:13:42.168438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.456 [2024-10-15 01:13:42.170491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.456 [2024-10-15 01:13:42.170561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:29.456 [2024-10-15 01:13:42.170659] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:29.456 [2024-10-15 01:13:42.170727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:29.456 [2024-10-15 01:13:42.170844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:29.456 [2024-10-15 01:13:42.170930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:29.456 spare 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.456 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.717 [2024-10-15 01:13:42.270818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:29.717 [2024-10-15 01:13:42.270846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:29.717 [2024-10-15 01:13:42.271127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:12:29.717 [2024-10-15 01:13:42.271286] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:29.717 [2024-10-15 01:13:42.271298] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:29.717 [2024-10-15 01:13:42.271450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.717 "name": "raid_bdev1", 00:12:29.717 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:29.717 "strip_size_kb": 0, 00:12:29.717 "state": "online", 00:12:29.717 "raid_level": "raid1", 00:12:29.717 "superblock": true, 00:12:29.717 "num_base_bdevs": 4, 00:12:29.717 "num_base_bdevs_discovered": 3, 00:12:29.717 "num_base_bdevs_operational": 3, 00:12:29.717 "base_bdevs_list": [ 00:12:29.717 { 00:12:29.717 "name": "spare", 00:12:29.717 "uuid": "6098859b-45ea-59b7-bd8b-886bb999a04d", 00:12:29.717 "is_configured": true, 00:12:29.717 "data_offset": 2048, 00:12:29.717 "data_size": 63488 00:12:29.717 }, 00:12:29.717 { 00:12:29.717 "name": null, 00:12:29.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.717 "is_configured": false, 00:12:29.717 "data_offset": 2048, 00:12:29.717 "data_size": 63488 00:12:29.717 }, 00:12:29.717 { 00:12:29.717 "name": "BaseBdev3", 00:12:29.717 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:29.717 "is_configured": true, 00:12:29.717 "data_offset": 2048, 00:12:29.717 "data_size": 63488 00:12:29.717 }, 00:12:29.717 { 00:12:29.717 "name": "BaseBdev4", 00:12:29.717 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:29.717 "is_configured": true, 00:12:29.717 "data_offset": 2048, 00:12:29.717 "data_size": 63488 00:12:29.717 } 00:12:29.717 ] 00:12:29.717 }' 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.717 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.287 01:13:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.287 "name": "raid_bdev1", 00:12:30.287 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:30.287 "strip_size_kb": 0, 00:12:30.287 "state": "online", 00:12:30.287 "raid_level": "raid1", 00:12:30.287 "superblock": true, 00:12:30.287 "num_base_bdevs": 4, 00:12:30.287 "num_base_bdevs_discovered": 3, 00:12:30.287 "num_base_bdevs_operational": 3, 00:12:30.287 "base_bdevs_list": [ 00:12:30.287 { 00:12:30.287 "name": "spare", 00:12:30.287 "uuid": "6098859b-45ea-59b7-bd8b-886bb999a04d", 00:12:30.287 "is_configured": true, 00:12:30.287 "data_offset": 2048, 00:12:30.287 "data_size": 63488 00:12:30.287 }, 00:12:30.287 { 00:12:30.287 "name": null, 00:12:30.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.287 "is_configured": false, 00:12:30.287 "data_offset": 2048, 00:12:30.287 "data_size": 63488 00:12:30.287 }, 00:12:30.287 { 00:12:30.287 "name": "BaseBdev3", 00:12:30.287 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:30.287 "is_configured": true, 00:12:30.287 "data_offset": 2048, 00:12:30.287 "data_size": 63488 00:12:30.287 
}, 00:12:30.287 { 00:12:30.287 "name": "BaseBdev4", 00:12:30.287 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:30.287 "is_configured": true, 00:12:30.287 "data_offset": 2048, 00:12:30.287 "data_size": 63488 00:12:30.287 } 00:12:30.287 ] 00:12:30.287 }' 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:30.287 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.288 [2024-10-15 01:13:42.867088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.288 "name": "raid_bdev1", 00:12:30.288 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:30.288 "strip_size_kb": 0, 00:12:30.288 "state": "online", 00:12:30.288 "raid_level": "raid1", 00:12:30.288 "superblock": true, 00:12:30.288 "num_base_bdevs": 4, 00:12:30.288 "num_base_bdevs_discovered": 2, 00:12:30.288 "num_base_bdevs_operational": 
2, 00:12:30.288 "base_bdevs_list": [ 00:12:30.288 { 00:12:30.288 "name": null, 00:12:30.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.288 "is_configured": false, 00:12:30.288 "data_offset": 0, 00:12:30.288 "data_size": 63488 00:12:30.288 }, 00:12:30.288 { 00:12:30.288 "name": null, 00:12:30.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.288 "is_configured": false, 00:12:30.288 "data_offset": 2048, 00:12:30.288 "data_size": 63488 00:12:30.288 }, 00:12:30.288 { 00:12:30.288 "name": "BaseBdev3", 00:12:30.288 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:30.288 "is_configured": true, 00:12:30.288 "data_offset": 2048, 00:12:30.288 "data_size": 63488 00:12:30.288 }, 00:12:30.288 { 00:12:30.288 "name": "BaseBdev4", 00:12:30.288 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:30.288 "is_configured": true, 00:12:30.288 "data_offset": 2048, 00:12:30.288 "data_size": 63488 00:12:30.288 } 00:12:30.288 ] 00:12:30.288 }' 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.288 01:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.857 01:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:30.857 01:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.857 01:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.858 [2024-10-15 01:13:43.294390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.858 [2024-10-15 01:13:43.294581] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:30.858 [2024-10-15 01:13:43.294601] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
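The `verify_raid_bdev_state` calls traced above all reduce to the same pattern: fetch `bdev_raid_get_bdevs all` over RPC, pick out the bdev under test with `jq`, and compare fields like `state` and `num_base_bdevs_discovered` against expectations. A minimal standalone sketch of that pattern, with a trimmed sample JSON inlined in place of the live RPC output (the field values here mirror this log but are otherwise illustrative):

```shell
# In the real test this JSON comes from: rpc_cmd bdev_raid_get_bdevs all.
# A trimmed inline sample stands in for it here.
raid_bdevs='[{"name":"raid_bdev1","state":"online","raid_level":"raid1","num_base_bdevs":4,"num_base_bdevs_discovered":3,"num_base_bdevs_operational":3}]'

# Select the bdev under test, as bdev_raid.sh@113 does.
info=$(echo "$raid_bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')

state=$(echo "$info" | jq -r '.state')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')
operational=$(echo "$info" | jq -r '.num_base_bdevs_operational')

# The checks verify_raid_bdev_state performs, in miniature.
[ "$state" = "online" ] || exit 1
[ "$discovered" -eq "$operational" ] || exit 1
echo "raid_bdev1 verified: $state, $discovered/$operational base bdevs"
```

This assumes `jq` is available, as it is in the autotest environment; the real helper additionally checks `raid_level` and `strip_size_kb`.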
00:12:30.858 [2024-10-15 01:13:43.294649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.858 [2024-10-15 01:13:43.298807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caebd0 00:12:30.858 01:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.858 01:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:30.858 [2024-10-15 01:13:43.300813] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.796 "name": "raid_bdev1", 00:12:31.796 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:31.796 "strip_size_kb": 0, 00:12:31.796 "state": "online", 00:12:31.796 "raid_level": "raid1", 
00:12:31.796 "superblock": true, 00:12:31.796 "num_base_bdevs": 4, 00:12:31.796 "num_base_bdevs_discovered": 3, 00:12:31.796 "num_base_bdevs_operational": 3, 00:12:31.796 "process": { 00:12:31.796 "type": "rebuild", 00:12:31.796 "target": "spare", 00:12:31.796 "progress": { 00:12:31.796 "blocks": 20480, 00:12:31.796 "percent": 32 00:12:31.796 } 00:12:31.796 }, 00:12:31.796 "base_bdevs_list": [ 00:12:31.796 { 00:12:31.796 "name": "spare", 00:12:31.796 "uuid": "6098859b-45ea-59b7-bd8b-886bb999a04d", 00:12:31.796 "is_configured": true, 00:12:31.796 "data_offset": 2048, 00:12:31.796 "data_size": 63488 00:12:31.796 }, 00:12:31.796 { 00:12:31.796 "name": null, 00:12:31.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.796 "is_configured": false, 00:12:31.796 "data_offset": 2048, 00:12:31.796 "data_size": 63488 00:12:31.796 }, 00:12:31.796 { 00:12:31.796 "name": "BaseBdev3", 00:12:31.796 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:31.796 "is_configured": true, 00:12:31.796 "data_offset": 2048, 00:12:31.796 "data_size": 63488 00:12:31.796 }, 00:12:31.796 { 00:12:31.796 "name": "BaseBdev4", 00:12:31.796 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:31.796 "is_configured": true, 00:12:31.796 "data_offset": 2048, 00:12:31.796 "data_size": 63488 00:12:31.796 } 00:12:31.796 ] 00:12:31.796 }' 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:31.796 01:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.796 [2024-10-15 01:13:44.437345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:31.797 [2024-10-15 01:13:44.504933] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:31.797 [2024-10-15 01:13:44.505039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.797 [2024-10-15 01:13:44.505062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:31.797 [2024-10-15 01:13:44.505072] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:31.797 01:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.797 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:31.797 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.797 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.797 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.797 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.797 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:31.797 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.797 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.797 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.797 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.057 01:13:44 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.057 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.057 01:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.057 01:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.057 01:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.057 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.057 "name": "raid_bdev1", 00:12:32.057 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:32.057 "strip_size_kb": 0, 00:12:32.057 "state": "online", 00:12:32.057 "raid_level": "raid1", 00:12:32.057 "superblock": true, 00:12:32.057 "num_base_bdevs": 4, 00:12:32.057 "num_base_bdevs_discovered": 2, 00:12:32.057 "num_base_bdevs_operational": 2, 00:12:32.057 "base_bdevs_list": [ 00:12:32.057 { 00:12:32.057 "name": null, 00:12:32.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.057 "is_configured": false, 00:12:32.057 "data_offset": 0, 00:12:32.057 "data_size": 63488 00:12:32.057 }, 00:12:32.057 { 00:12:32.057 "name": null, 00:12:32.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.057 "is_configured": false, 00:12:32.057 "data_offset": 2048, 00:12:32.057 "data_size": 63488 00:12:32.057 }, 00:12:32.057 { 00:12:32.057 "name": "BaseBdev3", 00:12:32.057 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:32.057 "is_configured": true, 00:12:32.057 "data_offset": 2048, 00:12:32.057 "data_size": 63488 00:12:32.057 }, 00:12:32.057 { 00:12:32.057 "name": "BaseBdev4", 00:12:32.057 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:32.057 "is_configured": true, 00:12:32.057 "data_offset": 2048, 00:12:32.057 "data_size": 63488 00:12:32.057 } 00:12:32.057 ] 00:12:32.057 }' 00:12:32.057 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:32.057 01:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.317 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:32.317 01:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.317 01:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.317 [2024-10-15 01:13:44.932645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:32.317 [2024-10-15 01:13:44.932749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.317 [2024-10-15 01:13:44.932802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:12:32.317 [2024-10-15 01:13:44.932833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.317 [2024-10-15 01:13:44.933371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.317 [2024-10-15 01:13:44.933432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:32.317 [2024-10-15 01:13:44.933567] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:32.317 [2024-10-15 01:13:44.933616] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:32.317 [2024-10-15 01:13:44.933676] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
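The rebuild-progress checks at `bdev_raid.sh@176`/`@177` above rely on jq's `//` alternative operator so that a bdev with no active process (no `"process"` object at all, as in the post-rebuild dumps) cleanly yields `"none"` instead of `null`. A small self-contained sketch, with sample JSON inlined in place of the RPC output:

```shell
# During a rebuild, bdev_raid_get_bdevs reports a "process" object
# (sample values mirroring this log's progress dump):
info='{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare","progress":{"blocks":20480,"percent":32}}}'

ptype=$(echo "$info" | jq -r '.process.type // "none"')
ptarget=$(echo "$info" | jq -r '.process.target // "none"')

# With no process object, the // operator defaults both to "none":
idle=$(echo '{"name":"raid_bdev1"}' | jq -r '.process.type // "none"')

echo "process: $ptype -> $ptarget (idle case: $idle)"
```

Without the `// "none"` fallback, `jq` would print `null` for the idle case and the `[[ none == \n\o\n\e ]]` comparisons in the trace would fail.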
00:12:32.317 [2024-10-15 01:13:44.933736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:32.317 [2024-10-15 01:13:44.937764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:12:32.317 spare 00:12:32.317 01:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.317 01:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:32.317 [2024-10-15 01:13:44.939695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:33.258 01:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.258 01:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.258 01:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.258 01:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.258 01:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.258 01:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.258 01:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.258 01:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.258 01:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.258 01:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.518 01:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.518 "name": "raid_bdev1", 00:12:33.518 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:33.518 "strip_size_kb": 0, 00:12:33.518 "state": "online", 00:12:33.518 
"raid_level": "raid1", 00:12:33.518 "superblock": true, 00:12:33.518 "num_base_bdevs": 4, 00:12:33.518 "num_base_bdevs_discovered": 3, 00:12:33.518 "num_base_bdevs_operational": 3, 00:12:33.518 "process": { 00:12:33.518 "type": "rebuild", 00:12:33.518 "target": "spare", 00:12:33.518 "progress": { 00:12:33.518 "blocks": 20480, 00:12:33.518 "percent": 32 00:12:33.518 } 00:12:33.518 }, 00:12:33.518 "base_bdevs_list": [ 00:12:33.518 { 00:12:33.518 "name": "spare", 00:12:33.518 "uuid": "6098859b-45ea-59b7-bd8b-886bb999a04d", 00:12:33.518 "is_configured": true, 00:12:33.518 "data_offset": 2048, 00:12:33.518 "data_size": 63488 00:12:33.518 }, 00:12:33.518 { 00:12:33.518 "name": null, 00:12:33.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.518 "is_configured": false, 00:12:33.518 "data_offset": 2048, 00:12:33.518 "data_size": 63488 00:12:33.518 }, 00:12:33.518 { 00:12:33.518 "name": "BaseBdev3", 00:12:33.518 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:33.518 "is_configured": true, 00:12:33.518 "data_offset": 2048, 00:12:33.518 "data_size": 63488 00:12:33.518 }, 00:12:33.518 { 00:12:33.518 "name": "BaseBdev4", 00:12:33.518 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:33.518 "is_configured": true, 00:12:33.518 "data_offset": 2048, 00:12:33.518 "data_size": 63488 00:12:33.518 } 00:12:33.518 ] 00:12:33.518 }' 00:12:33.518 01:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.518 [2024-10-15 01:13:46.076032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:33.518 [2024-10-15 01:13:46.143848] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:33.518 [2024-10-15 01:13:46.143897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.518 [2024-10-15 01:13:46.143915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:33.518 [2024-10-15 01:13:46.143921] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.518 
01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.518 01:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.519 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.519 "name": "raid_bdev1", 00:12:33.519 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:33.519 "strip_size_kb": 0, 00:12:33.519 "state": "online", 00:12:33.519 "raid_level": "raid1", 00:12:33.519 "superblock": true, 00:12:33.519 "num_base_bdevs": 4, 00:12:33.519 "num_base_bdevs_discovered": 2, 00:12:33.519 "num_base_bdevs_operational": 2, 00:12:33.519 "base_bdevs_list": [ 00:12:33.519 { 00:12:33.519 "name": null, 00:12:33.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.519 "is_configured": false, 00:12:33.519 "data_offset": 0, 00:12:33.519 "data_size": 63488 00:12:33.519 }, 00:12:33.519 { 00:12:33.519 "name": null, 00:12:33.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.519 "is_configured": false, 00:12:33.519 "data_offset": 2048, 00:12:33.519 "data_size": 63488 00:12:33.519 }, 00:12:33.519 { 00:12:33.519 "name": "BaseBdev3", 00:12:33.519 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:33.519 "is_configured": true, 00:12:33.519 "data_offset": 2048, 00:12:33.519 "data_size": 63488 00:12:33.519 }, 00:12:33.519 { 00:12:33.519 "name": "BaseBdev4", 00:12:33.519 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:33.519 "is_configured": true, 00:12:33.519 "data_offset": 2048, 00:12:33.519 "data_size": 63488 00:12:33.519 } 00:12:33.519 ] 00:12:33.519 }' 00:12:33.519 01:13:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.519 01:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.088 "name": "raid_bdev1", 00:12:34.088 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:34.088 "strip_size_kb": 0, 00:12:34.088 "state": "online", 00:12:34.088 "raid_level": "raid1", 00:12:34.088 "superblock": true, 00:12:34.088 "num_base_bdevs": 4, 00:12:34.088 "num_base_bdevs_discovered": 2, 00:12:34.088 "num_base_bdevs_operational": 2, 00:12:34.088 "base_bdevs_list": [ 00:12:34.088 { 00:12:34.088 "name": null, 00:12:34.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.088 "is_configured": false, 00:12:34.088 "data_offset": 0, 00:12:34.088 "data_size": 63488 00:12:34.088 }, 00:12:34.088 
{ 00:12:34.088 "name": null, 00:12:34.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.088 "is_configured": false, 00:12:34.088 "data_offset": 2048, 00:12:34.088 "data_size": 63488 00:12:34.088 }, 00:12:34.088 { 00:12:34.088 "name": "BaseBdev3", 00:12:34.088 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:34.088 "is_configured": true, 00:12:34.088 "data_offset": 2048, 00:12:34.088 "data_size": 63488 00:12:34.088 }, 00:12:34.088 { 00:12:34.088 "name": "BaseBdev4", 00:12:34.088 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:34.088 "is_configured": true, 00:12:34.088 "data_offset": 2048, 00:12:34.088 "data_size": 63488 00:12:34.088 } 00:12:34.088 ] 00:12:34.088 }' 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.088 [2024-10-15 01:13:46.655308] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:34.088 [2024-10-15 01:13:46.655359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.088 [2024-10-15 01:13:46.655381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:34.088 [2024-10-15 01:13:46.655390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.088 [2024-10-15 01:13:46.655806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.088 [2024-10-15 01:13:46.655823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:34.088 [2024-10-15 01:13:46.655895] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:34.088 [2024-10-15 01:13:46.655907] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:34.088 [2024-10-15 01:13:46.655918] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:34.088 [2024-10-15 01:13:46.655929] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:34.088 BaseBdev1 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.088 01:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.029 01:13:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.029 "name": "raid_bdev1", 00:12:35.029 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:35.029 "strip_size_kb": 0, 00:12:35.029 "state": "online", 00:12:35.029 "raid_level": "raid1", 00:12:35.029 "superblock": true, 00:12:35.029 "num_base_bdevs": 4, 00:12:35.029 "num_base_bdevs_discovered": 2, 00:12:35.029 "num_base_bdevs_operational": 2, 00:12:35.029 "base_bdevs_list": [ 00:12:35.029 { 00:12:35.029 "name": null, 00:12:35.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.029 "is_configured": false, 00:12:35.029 "data_offset": 0, 00:12:35.029 "data_size": 63488 00:12:35.029 }, 00:12:35.029 { 00:12:35.029 "name": null, 00:12:35.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.029 
"is_configured": false, 00:12:35.029 "data_offset": 2048, 00:12:35.029 "data_size": 63488 00:12:35.029 }, 00:12:35.029 { 00:12:35.029 "name": "BaseBdev3", 00:12:35.029 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:35.029 "is_configured": true, 00:12:35.029 "data_offset": 2048, 00:12:35.029 "data_size": 63488 00:12:35.029 }, 00:12:35.029 { 00:12:35.029 "name": "BaseBdev4", 00:12:35.029 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:35.029 "is_configured": true, 00:12:35.029 "data_offset": 2048, 00:12:35.029 "data_size": 63488 00:12:35.029 } 00:12:35.029 ] 00:12:35.029 }' 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.029 01:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:35.599 "name": "raid_bdev1", 00:12:35.599 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:35.599 "strip_size_kb": 0, 00:12:35.599 "state": "online", 00:12:35.599 "raid_level": "raid1", 00:12:35.599 "superblock": true, 00:12:35.599 "num_base_bdevs": 4, 00:12:35.599 "num_base_bdevs_discovered": 2, 00:12:35.599 "num_base_bdevs_operational": 2, 00:12:35.599 "base_bdevs_list": [ 00:12:35.599 { 00:12:35.599 "name": null, 00:12:35.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.599 "is_configured": false, 00:12:35.599 "data_offset": 0, 00:12:35.599 "data_size": 63488 00:12:35.599 }, 00:12:35.599 { 00:12:35.599 "name": null, 00:12:35.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.599 "is_configured": false, 00:12:35.599 "data_offset": 2048, 00:12:35.599 "data_size": 63488 00:12:35.599 }, 00:12:35.599 { 00:12:35.599 "name": "BaseBdev3", 00:12:35.599 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:35.599 "is_configured": true, 00:12:35.599 "data_offset": 2048, 00:12:35.599 "data_size": 63488 00:12:35.599 }, 00:12:35.599 { 00:12:35.599 "name": "BaseBdev4", 00:12:35.599 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:35.599 "is_configured": true, 00:12:35.599 "data_offset": 2048, 00:12:35.599 "data_size": 63488 00:12:35.599 } 00:12:35.599 ] 00:12:35.599 }' 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.599 [2024-10-15 01:13:48.176707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.599 [2024-10-15 01:13:48.176906] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:35.599 [2024-10-15 01:13:48.176958] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:35.599 request: 00:12:35.599 { 00:12:35.599 "base_bdev": "BaseBdev1", 00:12:35.599 "raid_bdev": "raid_bdev1", 00:12:35.599 "method": "bdev_raid_add_base_bdev", 00:12:35.599 "req_id": 1 00:12:35.599 } 00:12:35.599 Got JSON-RPC error response 00:12:35.599 response: 00:12:35.599 { 00:12:35.599 "code": -22, 00:12:35.599 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:35.599 } 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:35.599 01:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.538 "name": "raid_bdev1", 00:12:36.538 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:36.538 "strip_size_kb": 0, 00:12:36.538 "state": "online", 00:12:36.538 "raid_level": "raid1", 00:12:36.538 "superblock": true, 00:12:36.538 "num_base_bdevs": 4, 00:12:36.538 "num_base_bdevs_discovered": 2, 00:12:36.538 "num_base_bdevs_operational": 2, 00:12:36.538 "base_bdevs_list": [ 00:12:36.538 { 00:12:36.538 "name": null, 00:12:36.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.538 "is_configured": false, 00:12:36.538 "data_offset": 0, 00:12:36.538 "data_size": 63488 00:12:36.538 }, 00:12:36.538 { 00:12:36.538 "name": null, 00:12:36.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.538 "is_configured": false, 00:12:36.538 "data_offset": 2048, 00:12:36.538 "data_size": 63488 00:12:36.538 }, 00:12:36.538 { 00:12:36.538 "name": "BaseBdev3", 00:12:36.538 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:36.538 "is_configured": true, 00:12:36.538 "data_offset": 2048, 00:12:36.538 "data_size": 63488 00:12:36.538 }, 00:12:36.538 { 00:12:36.538 "name": "BaseBdev4", 00:12:36.538 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:36.538 "is_configured": true, 00:12:36.538 "data_offset": 2048, 00:12:36.538 "data_size": 63488 00:12:36.538 } 00:12:36.538 ] 00:12:36.538 }' 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.538 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.109 01:13:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.109 "name": "raid_bdev1", 00:12:37.109 "uuid": "fb1001ac-e743-4328-aea1-6b854e566e7a", 00:12:37.109 "strip_size_kb": 0, 00:12:37.109 "state": "online", 00:12:37.109 "raid_level": "raid1", 00:12:37.109 "superblock": true, 00:12:37.109 "num_base_bdevs": 4, 00:12:37.109 "num_base_bdevs_discovered": 2, 00:12:37.109 "num_base_bdevs_operational": 2, 00:12:37.109 "base_bdevs_list": [ 00:12:37.109 { 00:12:37.109 "name": null, 00:12:37.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.109 "is_configured": false, 00:12:37.109 "data_offset": 0, 00:12:37.109 "data_size": 63488 00:12:37.109 }, 00:12:37.109 { 00:12:37.109 "name": null, 00:12:37.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.109 "is_configured": false, 00:12:37.109 "data_offset": 2048, 00:12:37.109 "data_size": 63488 00:12:37.109 }, 00:12:37.109 { 00:12:37.109 "name": "BaseBdev3", 00:12:37.109 "uuid": "fe9e80da-45b5-540e-923f-ee9c323fbb86", 00:12:37.109 "is_configured": true, 00:12:37.109 "data_offset": 2048, 00:12:37.109 "data_size": 63488 00:12:37.109 }, 
00:12:37.109 { 00:12:37.109 "name": "BaseBdev4", 00:12:37.109 "uuid": "438fd9d5-02b4-57b0-83cf-bda34203042f", 00:12:37.109 "is_configured": true, 00:12:37.109 "data_offset": 2048, 00:12:37.109 "data_size": 63488 00:12:37.109 } 00:12:37.109 ] 00:12:37.109 }' 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88356 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88356 ']' 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88356 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88356 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88356' 00:12:37.109 killing process with pid 88356 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88356 00:12:37.109 Received shutdown signal, test time was about 60.000000 seconds 00:12:37.109 00:12:37.109 Latency(us) 00:12:37.109 
[2024-10-15T01:13:49.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.109 [2024-10-15T01:13:49.833Z] =================================================================================================================== 00:12:37.109 [2024-10-15T01:13:49.833Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:37.109 [2024-10-15 01:13:49.744639] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:37.109 01:13:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88356 00:12:37.109 [2024-10-15 01:13:49.744768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.109 [2024-10-15 01:13:49.744836] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.109 [2024-10-15 01:13:49.744848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:37.110 [2024-10-15 01:13:49.796845] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:37.370 ************************************ 00:12:37.370 END TEST raid_rebuild_test_sb 00:12:37.370 ************************************ 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:37.370 00:12:37.370 real 0m22.539s 00:12:37.370 user 0m27.692s 00:12:37.370 sys 0m3.376s 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.370 01:13:50 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:12:37.370 01:13:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:37.370 01:13:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.370 01:13:50 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:12:37.370 ************************************ 00:12:37.370 START TEST raid_rebuild_test_io 00:12:37.370 ************************************ 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:37.370 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89103 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89103 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:37.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89103 ']' 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:37.630 01:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.630 [2024-10-15 01:13:50.175501] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:12:37.630 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:37.630 Zero copy mechanism will not be used. 00:12:37.630 [2024-10-15 01:13:50.175691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89103 ] 00:12:37.630 [2024-10-15 01:13:50.322771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.630 [2024-10-15 01:13:50.350686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.890 [2024-10-15 01:13:50.394329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.890 [2024-10-15 01:13:50.394364] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.460 01:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:38.460 01:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:38.460 01:13:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:38.460 01:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:38.460 01:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.460 01:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.460 BaseBdev1_malloc 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.460 [2024-10-15 01:13:51.020924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:38.460 [2024-10-15 01:13:51.020990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.460 [2024-10-15 01:13:51.021009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:38.460 [2024-10-15 01:13:51.021020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.460 [2024-10-15 01:13:51.023045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.460 [2024-10-15 01:13:51.023084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.460 BaseBdev1 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.460 BaseBdev2_malloc 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.460 [2024-10-15 01:13:51.049625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:38.460 [2024-10-15 01:13:51.049722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.460 [2024-10-15 01:13:51.049744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:38.460 [2024-10-15 01:13:51.049752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.460 [2024-10-15 01:13:51.051798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.460 [2024-10-15 01:13:51.051838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:38.460 BaseBdev2 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.460 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:12:38.461 BaseBdev3_malloc 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.461 [2024-10-15 01:13:51.078419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:38.461 [2024-10-15 01:13:51.078493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.461 [2024-10-15 01:13:51.078514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:38.461 [2024-10-15 01:13:51.078522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.461 [2024-10-15 01:13:51.080539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.461 [2024-10-15 01:13:51.080574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:38.461 BaseBdev3 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.461 BaseBdev4_malloc 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.461 [2024-10-15 01:13:51.123886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:38.461 [2024-10-15 01:13:51.123981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.461 [2024-10-15 01:13:51.124030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:38.461 [2024-10-15 01:13:51.124051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.461 [2024-10-15 01:13:51.128037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.461 [2024-10-15 01:13:51.128086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:38.461 BaseBdev4 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.461 spare_malloc 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:12:38.461 spare_delay 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.461 [2024-10-15 01:13:51.165853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:38.461 [2024-10-15 01:13:51.165896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.461 [2024-10-15 01:13:51.165930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:38.461 [2024-10-15 01:13:51.165938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.461 [2024-10-15 01:13:51.167937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.461 [2024-10-15 01:13:51.167971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:38.461 spare 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.461 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.461 [2024-10-15 01:13:51.177891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.461 [2024-10-15 01:13:51.179689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.461 [2024-10-15 01:13:51.179777] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.461 [2024-10-15 01:13:51.179827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:38.461 [2024-10-15 01:13:51.179905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:38.461 [2024-10-15 01:13:51.179920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:38.461 [2024-10-15 01:13:51.180169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:38.461 [2024-10-15 01:13:51.180318] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:38.461 [2024-10-15 01:13:51.180335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:38.461 [2024-10-15 01:13:51.180449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.721 
01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.721 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.721 "name": "raid_bdev1", 00:12:38.721 "uuid": "1f45a98d-fe43-4c2b-957f-1ea95708effd", 00:12:38.721 "strip_size_kb": 0, 00:12:38.721 "state": "online", 00:12:38.721 "raid_level": "raid1", 00:12:38.722 "superblock": false, 00:12:38.722 "num_base_bdevs": 4, 00:12:38.722 "num_base_bdevs_discovered": 4, 00:12:38.722 "num_base_bdevs_operational": 4, 00:12:38.722 "base_bdevs_list": [ 00:12:38.722 { 00:12:38.722 "name": "BaseBdev1", 00:12:38.722 "uuid": "310efa52-f781-50f0-8a7f-e5cc96afbdce", 00:12:38.722 "is_configured": true, 00:12:38.722 "data_offset": 0, 00:12:38.722 "data_size": 65536 00:12:38.722 }, 00:12:38.722 { 00:12:38.722 "name": "BaseBdev2", 00:12:38.722 "uuid": "7c0e0b3f-47cd-552c-95d3-a4b19921c71f", 00:12:38.722 "is_configured": true, 00:12:38.722 "data_offset": 0, 00:12:38.722 "data_size": 65536 00:12:38.722 }, 00:12:38.722 { 00:12:38.722 "name": "BaseBdev3", 00:12:38.722 "uuid": "004cb1d7-a882-5fd1-a3af-005508eebeca", 00:12:38.722 "is_configured": true, 00:12:38.722 "data_offset": 0, 00:12:38.722 "data_size": 65536 00:12:38.722 }, 00:12:38.722 { 00:12:38.722 "name": "BaseBdev4", 00:12:38.722 "uuid": 
"864a6d42-a753-555c-8397-19593242e870", 00:12:38.722 "is_configured": true, 00:12:38.722 "data_offset": 0, 00:12:38.722 "data_size": 65536 00:12:38.722 } 00:12:38.722 ] 00:12:38.722 }' 00:12:38.722 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.722 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.982 [2024-10-15 01:13:51.577497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev1 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.982 [2024-10-15 01:13:51.649036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.982 01:13:51 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.982 "name": "raid_bdev1", 00:12:38.982 "uuid": "1f45a98d-fe43-4c2b-957f-1ea95708effd", 00:12:38.982 "strip_size_kb": 0, 00:12:38.982 "state": "online", 00:12:38.982 "raid_level": "raid1", 00:12:38.982 "superblock": false, 00:12:38.982 "num_base_bdevs": 4, 00:12:38.982 "num_base_bdevs_discovered": 3, 00:12:38.982 "num_base_bdevs_operational": 3, 00:12:38.982 "base_bdevs_list": [ 00:12:38.982 { 00:12:38.982 "name": null, 00:12:38.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.982 "is_configured": false, 00:12:38.982 "data_offset": 0, 00:12:38.982 "data_size": 65536 00:12:38.982 }, 00:12:38.982 { 00:12:38.982 "name": "BaseBdev2", 00:12:38.982 "uuid": "7c0e0b3f-47cd-552c-95d3-a4b19921c71f", 00:12:38.982 "is_configured": true, 00:12:38.982 "data_offset": 0, 00:12:38.982 "data_size": 65536 00:12:38.982 }, 00:12:38.982 { 00:12:38.982 "name": "BaseBdev3", 00:12:38.982 "uuid": "004cb1d7-a882-5fd1-a3af-005508eebeca", 00:12:38.982 "is_configured": true, 00:12:38.982 "data_offset": 0, 00:12:38.982 "data_size": 65536 00:12:38.982 }, 00:12:38.982 { 00:12:38.982 "name": "BaseBdev4", 00:12:38.982 "uuid": "864a6d42-a753-555c-8397-19593242e870", 00:12:38.982 "is_configured": true, 00:12:38.982 "data_offset": 0, 00:12:38.982 "data_size": 65536 00:12:38.982 } 00:12:38.982 ] 00:12:38.982 }' 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.982 01:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.242 [2024-10-15 01:13:51.750946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002870 00:12:39.242 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:39.242 Zero copy mechanism will not be used. 00:12:39.242 Running I/O for 60 seconds... 00:12:39.502 01:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:39.502 01:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.502 01:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.502 [2024-10-15 01:13:52.122618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:39.502 01:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.502 01:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:39.502 [2024-10-15 01:13:52.165342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:12:39.502 [2024-10-15 01:13:52.167315] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:39.762 [2024-10-15 01:13:52.281827] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:39.762 [2024-10-15 01:13:52.282369] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:40.022 [2024-10-15 01:13:52.492464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:40.022 [2024-10-15 01:13:52.493173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:40.282 151.00 IOPS, 453.00 MiB/s [2024-10-15T01:13:53.006Z] [2024-10-15 01:13:52.943179] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:40.282 [2024-10-15 
01:13:52.943969] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:40.542 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.542 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.542 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.542 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.542 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.542 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.542 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.542 01:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.542 01:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.542 01:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.542 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.542 "name": "raid_bdev1", 00:12:40.542 "uuid": "1f45a98d-fe43-4c2b-957f-1ea95708effd", 00:12:40.542 "strip_size_kb": 0, 00:12:40.542 "state": "online", 00:12:40.542 "raid_level": "raid1", 00:12:40.542 "superblock": false, 00:12:40.542 "num_base_bdevs": 4, 00:12:40.542 "num_base_bdevs_discovered": 4, 00:12:40.542 "num_base_bdevs_operational": 4, 00:12:40.542 "process": { 00:12:40.542 "type": "rebuild", 00:12:40.542 "target": "spare", 00:12:40.542 "progress": { 00:12:40.542 "blocks": 12288, 00:12:40.542 "percent": 18 00:12:40.542 } 00:12:40.542 }, 00:12:40.542 "base_bdevs_list": [ 00:12:40.542 { 00:12:40.542 "name": "spare", 
00:12:40.542 "uuid": "6c815014-c1e4-5d80-913c-a227463933d9", 00:12:40.542 "is_configured": true, 00:12:40.542 "data_offset": 0, 00:12:40.542 "data_size": 65536 00:12:40.542 }, 00:12:40.542 { 00:12:40.542 "name": "BaseBdev2", 00:12:40.542 "uuid": "7c0e0b3f-47cd-552c-95d3-a4b19921c71f", 00:12:40.542 "is_configured": true, 00:12:40.542 "data_offset": 0, 00:12:40.542 "data_size": 65536 00:12:40.542 }, 00:12:40.542 { 00:12:40.542 "name": "BaseBdev3", 00:12:40.542 "uuid": "004cb1d7-a882-5fd1-a3af-005508eebeca", 00:12:40.542 "is_configured": true, 00:12:40.542 "data_offset": 0, 00:12:40.542 "data_size": 65536 00:12:40.542 }, 00:12:40.542 { 00:12:40.542 "name": "BaseBdev4", 00:12:40.542 "uuid": "864a6d42-a753-555c-8397-19593242e870", 00:12:40.542 "is_configured": true, 00:12:40.542 "data_offset": 0, 00:12:40.542 "data_size": 65536 00:12:40.542 } 00:12:40.542 ] 00:12:40.542 }' 00:12:40.542 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.542 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.542 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.803 [2024-10-15 01:13:53.291615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.803 [2024-10-15 01:13:53.299809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.803 [2024-10-15 01:13:53.404771] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:40.803 [2024-10-15 01:13:53.407947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.803 [2024-10-15 01:13:53.407982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.803 [2024-10-15 01:13:53.407995] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:40.803 [2024-10-15 01:13:53.432127] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.803 01:13:53 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.803 "name": "raid_bdev1", 00:12:40.803 "uuid": "1f45a98d-fe43-4c2b-957f-1ea95708effd", 00:12:40.803 "strip_size_kb": 0, 00:12:40.803 "state": "online", 00:12:40.803 "raid_level": "raid1", 00:12:40.803 "superblock": false, 00:12:40.803 "num_base_bdevs": 4, 00:12:40.803 "num_base_bdevs_discovered": 3, 00:12:40.803 "num_base_bdevs_operational": 3, 00:12:40.803 "base_bdevs_list": [ 00:12:40.803 { 00:12:40.803 "name": null, 00:12:40.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.803 "is_configured": false, 00:12:40.803 "data_offset": 0, 00:12:40.803 "data_size": 65536 00:12:40.803 }, 00:12:40.803 { 00:12:40.803 "name": "BaseBdev2", 00:12:40.803 "uuid": "7c0e0b3f-47cd-552c-95d3-a4b19921c71f", 00:12:40.803 "is_configured": true, 00:12:40.803 "data_offset": 0, 00:12:40.803 "data_size": 65536 00:12:40.803 }, 00:12:40.803 { 00:12:40.803 "name": "BaseBdev3", 00:12:40.803 "uuid": "004cb1d7-a882-5fd1-a3af-005508eebeca", 00:12:40.803 "is_configured": true, 00:12:40.803 "data_offset": 0, 00:12:40.803 "data_size": 65536 00:12:40.803 }, 00:12:40.803 { 00:12:40.803 "name": "BaseBdev4", 00:12:40.803 "uuid": "864a6d42-a753-555c-8397-19593242e870", 00:12:40.803 "is_configured": true, 00:12:40.803 "data_offset": 0, 00:12:40.803 "data_size": 65536 00:12:40.803 } 00:12:40.803 ] 00:12:40.803 }' 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.803 01:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:12:41.323 138.50 IOPS, 415.50 MiB/s [2024-10-15T01:13:54.047Z] 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:41.323 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.323 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:41.323 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:41.323 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.323 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.323 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.323 01:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.323 01:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.323 01:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.323 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.323 "name": "raid_bdev1", 00:12:41.323 "uuid": "1f45a98d-fe43-4c2b-957f-1ea95708effd", 00:12:41.323 "strip_size_kb": 0, 00:12:41.323 "state": "online", 00:12:41.323 "raid_level": "raid1", 00:12:41.323 "superblock": false, 00:12:41.323 "num_base_bdevs": 4, 00:12:41.323 "num_base_bdevs_discovered": 3, 00:12:41.323 "num_base_bdevs_operational": 3, 00:12:41.323 "base_bdevs_list": [ 00:12:41.323 { 00:12:41.323 "name": null, 00:12:41.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.323 "is_configured": false, 00:12:41.323 "data_offset": 0, 00:12:41.323 "data_size": 65536 00:12:41.323 }, 00:12:41.323 { 00:12:41.323 "name": "BaseBdev2", 00:12:41.323 "uuid": "7c0e0b3f-47cd-552c-95d3-a4b19921c71f", 
00:12:41.323 "is_configured": true, 00:12:41.323 "data_offset": 0, 00:12:41.323 "data_size": 65536 00:12:41.323 }, 00:12:41.323 { 00:12:41.323 "name": "BaseBdev3", 00:12:41.323 "uuid": "004cb1d7-a882-5fd1-a3af-005508eebeca", 00:12:41.323 "is_configured": true, 00:12:41.323 "data_offset": 0, 00:12:41.323 "data_size": 65536 00:12:41.323 }, 00:12:41.323 { 00:12:41.323 "name": "BaseBdev4", 00:12:41.323 "uuid": "864a6d42-a753-555c-8397-19593242e870", 00:12:41.323 "is_configured": true, 00:12:41.323 "data_offset": 0, 00:12:41.323 "data_size": 65536 00:12:41.323 } 00:12:41.323 ] 00:12:41.323 }' 00:12:41.323 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.323 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:41.323 01:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.323 01:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:41.323 01:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:41.323 01:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.323 01:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.323 [2024-10-15 01:13:54.021727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:41.583 01:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.583 01:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:41.583 [2024-10-15 01:13:54.083037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:12:41.583 [2024-10-15 01:13:54.085115] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:41.583 [2024-10-15 01:13:54.212962] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:41.843 [2024-10-15 01:13:54.430209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:41.843 [2024-10-15 01:13:54.430792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:42.364 149.67 IOPS, 449.00 MiB/s [2024-10-15T01:13:55.088Z] [2024-10-15 01:13:54.881872] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:42.364 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.364 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.364 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.364 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.364 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.364 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.364 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.364 01:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.364 01:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.364 01:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.624 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.624 "name": "raid_bdev1", 00:12:42.624 "uuid": "1f45a98d-fe43-4c2b-957f-1ea95708effd", 00:12:42.624 "strip_size_kb": 0, 
00:12:42.624 "state": "online", 00:12:42.624 "raid_level": "raid1", 00:12:42.624 "superblock": false, 00:12:42.624 "num_base_bdevs": 4, 00:12:42.624 "num_base_bdevs_discovered": 4, 00:12:42.624 "num_base_bdevs_operational": 4, 00:12:42.624 "process": { 00:12:42.624 "type": "rebuild", 00:12:42.624 "target": "spare", 00:12:42.624 "progress": { 00:12:42.624 "blocks": 12288, 00:12:42.624 "percent": 18 00:12:42.624 } 00:12:42.624 }, 00:12:42.624 "base_bdevs_list": [ 00:12:42.624 { 00:12:42.624 "name": "spare", 00:12:42.624 "uuid": "6c815014-c1e4-5d80-913c-a227463933d9", 00:12:42.624 "is_configured": true, 00:12:42.624 "data_offset": 0, 00:12:42.624 "data_size": 65536 00:12:42.624 }, 00:12:42.624 { 00:12:42.624 "name": "BaseBdev2", 00:12:42.624 "uuid": "7c0e0b3f-47cd-552c-95d3-a4b19921c71f", 00:12:42.624 "is_configured": true, 00:12:42.624 "data_offset": 0, 00:12:42.624 "data_size": 65536 00:12:42.624 }, 00:12:42.624 { 00:12:42.624 "name": "BaseBdev3", 00:12:42.624 "uuid": "004cb1d7-a882-5fd1-a3af-005508eebeca", 00:12:42.624 "is_configured": true, 00:12:42.624 "data_offset": 0, 00:12:42.624 "data_size": 65536 00:12:42.624 }, 00:12:42.624 { 00:12:42.624 "name": "BaseBdev4", 00:12:42.624 "uuid": "864a6d42-a753-555c-8397-19593242e870", 00:12:42.624 "is_configured": true, 00:12:42.624 "data_offset": 0, 00:12:42.624 "data_size": 65536 00:12:42.624 } 00:12:42.624 ] 00:12:42.624 }' 00:12:42.624 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.624 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.624 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.624 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.624 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:42.624 01:13:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:42.624 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:42.624 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:42.624 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:42.624 01:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.624 01:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.624 [2024-10-15 01:13:55.215106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:42.624 [2024-10-15 01:13:55.230985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:42.624 [2024-10-15 01:13:55.231575] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:42.624 [2024-10-15 01:13:55.339105] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:12:42.624 [2024-10-15 01:13:55.339133] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:12:42.624 [2024-10-15 01:13:55.339200] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.885 01:13:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.885 "name": "raid_bdev1", 00:12:42.885 "uuid": "1f45a98d-fe43-4c2b-957f-1ea95708effd", 00:12:42.885 "strip_size_kb": 0, 00:12:42.885 "state": "online", 00:12:42.885 "raid_level": "raid1", 00:12:42.885 "superblock": false, 00:12:42.885 "num_base_bdevs": 4, 00:12:42.885 "num_base_bdevs_discovered": 3, 00:12:42.885 "num_base_bdevs_operational": 3, 00:12:42.885 "process": { 00:12:42.885 "type": "rebuild", 00:12:42.885 "target": "spare", 00:12:42.885 "progress": { 00:12:42.885 "blocks": 16384, 00:12:42.885 "percent": 25 00:12:42.885 } 00:12:42.885 }, 00:12:42.885 "base_bdevs_list": [ 00:12:42.885 { 00:12:42.885 "name": "spare", 00:12:42.885 "uuid": "6c815014-c1e4-5d80-913c-a227463933d9", 00:12:42.885 "is_configured": true, 00:12:42.885 "data_offset": 0, 00:12:42.885 "data_size": 65536 00:12:42.885 }, 00:12:42.885 { 00:12:42.885 "name": null, 00:12:42.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.885 
"is_configured": false, 00:12:42.885 "data_offset": 0, 00:12:42.885 "data_size": 65536 00:12:42.885 }, 00:12:42.885 { 00:12:42.885 "name": "BaseBdev3", 00:12:42.885 "uuid": "004cb1d7-a882-5fd1-a3af-005508eebeca", 00:12:42.885 "is_configured": true, 00:12:42.885 "data_offset": 0, 00:12:42.885 "data_size": 65536 00:12:42.885 }, 00:12:42.885 { 00:12:42.885 "name": "BaseBdev4", 00:12:42.885 "uuid": "864a6d42-a753-555c-8397-19593242e870", 00:12:42.885 "is_configured": true, 00:12:42.885 "data_offset": 0, 00:12:42.885 "data_size": 65536 00:12:42.885 } 00:12:42.885 ] 00:12:42.885 }' 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=387 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.885 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.885 "name": "raid_bdev1", 00:12:42.885 "uuid": "1f45a98d-fe43-4c2b-957f-1ea95708effd", 00:12:42.885 "strip_size_kb": 0, 00:12:42.885 "state": "online", 00:12:42.885 "raid_level": "raid1", 00:12:42.885 "superblock": false, 00:12:42.885 "num_base_bdevs": 4, 00:12:42.885 "num_base_bdevs_discovered": 3, 00:12:42.885 "num_base_bdevs_operational": 3, 00:12:42.885 "process": { 00:12:42.885 "type": "rebuild", 00:12:42.885 "target": "spare", 00:12:42.885 "progress": { 00:12:42.886 "blocks": 16384, 00:12:42.886 "percent": 25 00:12:42.886 } 00:12:42.886 }, 00:12:42.886 "base_bdevs_list": [ 00:12:42.886 { 00:12:42.886 "name": "spare", 00:12:42.886 "uuid": "6c815014-c1e4-5d80-913c-a227463933d9", 00:12:42.886 "is_configured": true, 00:12:42.886 "data_offset": 0, 00:12:42.886 "data_size": 65536 00:12:42.886 }, 00:12:42.886 { 00:12:42.886 "name": null, 00:12:42.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.886 "is_configured": false, 00:12:42.886 "data_offset": 0, 00:12:42.886 "data_size": 65536 00:12:42.886 }, 00:12:42.886 { 00:12:42.886 "name": "BaseBdev3", 00:12:42.886 "uuid": "004cb1d7-a882-5fd1-a3af-005508eebeca", 00:12:42.886 "is_configured": true, 00:12:42.886 "data_offset": 0, 00:12:42.886 "data_size": 65536 00:12:42.886 }, 00:12:42.886 { 00:12:42.886 "name": "BaseBdev4", 00:12:42.886 "uuid": "864a6d42-a753-555c-8397-19593242e870", 00:12:42.886 "is_configured": true, 00:12:42.886 "data_offset": 0, 00:12:42.886 "data_size": 65536 00:12:42.886 } 00:12:42.886 ] 00:12:42.886 }' 
00:12:42.886 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.886 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.886 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.162 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.162 01:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:43.162 [2024-10-15 01:13:55.655286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:43.163 [2024-10-15 01:13:55.655820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:43.163 127.50 IOPS, 382.50 MiB/s [2024-10-15T01:13:55.887Z] [2024-10-15 01:13:55.782081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:43.163 [2024-10-15 01:13:55.782525] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:43.443 [2024-10-15 01:13:56.119047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:43.703 [2024-10-15 01:13:56.247054] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:43.963 [2024-10-15 01:13:56.568268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:43.963 01:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:43.963 01:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:12:43.963 01:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.963 01:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.963 01:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.963 01:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.963 01:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.963 01:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.964 01:13:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.964 01:13:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.964 01:13:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.224 01:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.224 "name": "raid_bdev1", 00:12:44.224 "uuid": "1f45a98d-fe43-4c2b-957f-1ea95708effd", 00:12:44.224 "strip_size_kb": 0, 00:12:44.224 "state": "online", 00:12:44.224 "raid_level": "raid1", 00:12:44.224 "superblock": false, 00:12:44.224 "num_base_bdevs": 4, 00:12:44.224 "num_base_bdevs_discovered": 3, 00:12:44.224 "num_base_bdevs_operational": 3, 00:12:44.224 "process": { 00:12:44.224 "type": "rebuild", 00:12:44.224 "target": "spare", 00:12:44.224 "progress": { 00:12:44.224 "blocks": 32768, 00:12:44.224 "percent": 50 00:12:44.224 } 00:12:44.224 }, 00:12:44.224 "base_bdevs_list": [ 00:12:44.224 { 00:12:44.224 "name": "spare", 00:12:44.224 "uuid": "6c815014-c1e4-5d80-913c-a227463933d9", 00:12:44.224 "is_configured": true, 00:12:44.224 "data_offset": 0, 00:12:44.224 "data_size": 65536 00:12:44.224 }, 00:12:44.224 { 00:12:44.224 "name": null, 00:12:44.224 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:44.224 "is_configured": false, 00:12:44.224 "data_offset": 0, 00:12:44.224 "data_size": 65536 00:12:44.224 }, 00:12:44.224 { 00:12:44.224 "name": "BaseBdev3", 00:12:44.224 "uuid": "004cb1d7-a882-5fd1-a3af-005508eebeca", 00:12:44.224 "is_configured": true, 00:12:44.224 "data_offset": 0, 00:12:44.224 "data_size": 65536 00:12:44.224 }, 00:12:44.224 { 00:12:44.224 "name": "BaseBdev4", 00:12:44.224 "uuid": "864a6d42-a753-555c-8397-19593242e870", 00:12:44.224 "is_configured": true, 00:12:44.224 "data_offset": 0, 00:12:44.224 "data_size": 65536 00:12:44.224 } 00:12:44.224 ] 00:12:44.224 }' 00:12:44.224 01:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.224 01:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.224 01:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.224 112.00 IOPS, 336.00 MiB/s [2024-10-15T01:13:56.948Z] 01:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.224 01:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:44.483 [2024-10-15 01:13:57.030711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:44.743 [2024-10-15 01:13:57.466209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:45.313 98.50 IOPS, 295.50 MiB/s [2024-10-15T01:13:58.037Z] 01:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.313 "name": "raid_bdev1", 00:12:45.313 "uuid": "1f45a98d-fe43-4c2b-957f-1ea95708effd", 00:12:45.313 "strip_size_kb": 0, 00:12:45.313 "state": "online", 00:12:45.313 "raid_level": "raid1", 00:12:45.313 "superblock": false, 00:12:45.313 "num_base_bdevs": 4, 00:12:45.313 "num_base_bdevs_discovered": 3, 00:12:45.313 "num_base_bdevs_operational": 3, 00:12:45.313 "process": { 00:12:45.313 "type": "rebuild", 00:12:45.313 "target": "spare", 00:12:45.313 "progress": { 00:12:45.313 "blocks": 51200, 00:12:45.313 "percent": 78 00:12:45.313 } 00:12:45.313 }, 00:12:45.313 "base_bdevs_list": [ 00:12:45.313 { 00:12:45.313 "name": "spare", 00:12:45.313 "uuid": "6c815014-c1e4-5d80-913c-a227463933d9", 00:12:45.313 "is_configured": true, 00:12:45.313 "data_offset": 0, 00:12:45.313 "data_size": 65536 00:12:45.313 }, 00:12:45.313 { 00:12:45.313 "name": null, 00:12:45.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.313 "is_configured": false, 00:12:45.313 "data_offset": 0, 00:12:45.313 "data_size": 65536 00:12:45.313 }, 00:12:45.313 { 00:12:45.313 "name": "BaseBdev3", 
00:12:45.313 "uuid": "004cb1d7-a882-5fd1-a3af-005508eebeca", 00:12:45.313 "is_configured": true, 00:12:45.313 "data_offset": 0, 00:12:45.313 "data_size": 65536 00:12:45.313 }, 00:12:45.313 { 00:12:45.313 "name": "BaseBdev4", 00:12:45.313 "uuid": "864a6d42-a753-555c-8397-19593242e870", 00:12:45.313 "is_configured": true, 00:12:45.313 "data_offset": 0, 00:12:45.313 "data_size": 65536 00:12:45.313 } 00:12:45.313 ] 00:12:45.313 }' 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.313 01:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:45.882 [2024-10-15 01:13:58.460904] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:45.882 [2024-10-15 01:13:58.560772] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:45.882 [2024-10-15 01:13:58.568735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.402 89.57 IOPS, 268.71 MiB/s [2024-10-15T01:13:59.126Z] 01:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:46.402 01:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.402 01:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.402 01:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.402 01:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.402 
01:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.402 01:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.402 01:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.402 01:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.402 01:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.402 01:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.402 01:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.402 "name": "raid_bdev1", 00:12:46.402 "uuid": "1f45a98d-fe43-4c2b-957f-1ea95708effd", 00:12:46.402 "strip_size_kb": 0, 00:12:46.402 "state": "online", 00:12:46.402 "raid_level": "raid1", 00:12:46.402 "superblock": false, 00:12:46.402 "num_base_bdevs": 4, 00:12:46.402 "num_base_bdevs_discovered": 3, 00:12:46.402 "num_base_bdevs_operational": 3, 00:12:46.402 "base_bdevs_list": [ 00:12:46.402 { 00:12:46.402 "name": "spare", 00:12:46.402 "uuid": "6c815014-c1e4-5d80-913c-a227463933d9", 00:12:46.402 "is_configured": true, 00:12:46.402 "data_offset": 0, 00:12:46.402 "data_size": 65536 00:12:46.402 }, 00:12:46.402 { 00:12:46.402 "name": null, 00:12:46.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.402 "is_configured": false, 00:12:46.402 "data_offset": 0, 00:12:46.402 "data_size": 65536 00:12:46.402 }, 00:12:46.402 { 00:12:46.402 "name": "BaseBdev3", 00:12:46.402 "uuid": "004cb1d7-a882-5fd1-a3af-005508eebeca", 00:12:46.402 "is_configured": true, 00:12:46.402 "data_offset": 0, 00:12:46.402 "data_size": 65536 00:12:46.402 }, 00:12:46.402 { 00:12:46.402 "name": "BaseBdev4", 00:12:46.402 "uuid": "864a6d42-a753-555c-8397-19593242e870", 00:12:46.402 "is_configured": true, 00:12:46.402 "data_offset": 0, 00:12:46.402 "data_size": 
65536 00:12:46.402 } 00:12:46.402 ] 00:12:46.402 }' 00:12:46.402 01:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.402 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:46.402 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.402 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:46.402 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:46.402 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:46.403 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.403 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:46.403 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:46.403 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.403 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.403 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.403 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.403 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.403 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.403 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.403 "name": "raid_bdev1", 00:12:46.403 "uuid": "1f45a98d-fe43-4c2b-957f-1ea95708effd", 00:12:46.403 "strip_size_kb": 0, 00:12:46.403 "state": "online", 00:12:46.403 "raid_level": "raid1", 
00:12:46.403 "superblock": false, 00:12:46.403 "num_base_bdevs": 4, 00:12:46.403 "num_base_bdevs_discovered": 3, 00:12:46.403 "num_base_bdevs_operational": 3, 00:12:46.403 "base_bdevs_list": [ 00:12:46.403 { 00:12:46.403 "name": "spare", 00:12:46.403 "uuid": "6c815014-c1e4-5d80-913c-a227463933d9", 00:12:46.403 "is_configured": true, 00:12:46.403 "data_offset": 0, 00:12:46.403 "data_size": 65536 00:12:46.403 }, 00:12:46.403 { 00:12:46.403 "name": null, 00:12:46.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.403 "is_configured": false, 00:12:46.403 "data_offset": 0, 00:12:46.403 "data_size": 65536 00:12:46.403 }, 00:12:46.403 { 00:12:46.403 "name": "BaseBdev3", 00:12:46.403 "uuid": "004cb1d7-a882-5fd1-a3af-005508eebeca", 00:12:46.403 "is_configured": true, 00:12:46.403 "data_offset": 0, 00:12:46.403 "data_size": 65536 00:12:46.403 }, 00:12:46.403 { 00:12:46.403 "name": "BaseBdev4", 00:12:46.403 "uuid": "864a6d42-a753-555c-8397-19593242e870", 00:12:46.403 "is_configured": true, 00:12:46.403 "data_offset": 0, 00:12:46.403 "data_size": 65536 00:12:46.403 } 00:12:46.403 ] 00:12:46.403 }' 00:12:46.403 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.663 "name": "raid_bdev1", 00:12:46.663 "uuid": "1f45a98d-fe43-4c2b-957f-1ea95708effd", 00:12:46.663 "strip_size_kb": 0, 00:12:46.663 "state": "online", 00:12:46.663 "raid_level": "raid1", 00:12:46.663 "superblock": false, 00:12:46.663 "num_base_bdevs": 4, 00:12:46.663 "num_base_bdevs_discovered": 3, 00:12:46.663 "num_base_bdevs_operational": 3, 00:12:46.663 "base_bdevs_list": [ 00:12:46.663 { 00:12:46.663 "name": "spare", 00:12:46.663 "uuid": "6c815014-c1e4-5d80-913c-a227463933d9", 00:12:46.663 "is_configured": true, 00:12:46.663 "data_offset": 0, 00:12:46.663 "data_size": 65536 00:12:46.663 }, 00:12:46.663 { 00:12:46.663 "name": null, 00:12:46.663 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:46.663 "is_configured": false, 00:12:46.663 "data_offset": 0, 00:12:46.663 "data_size": 65536 00:12:46.663 }, 00:12:46.663 { 00:12:46.663 "name": "BaseBdev3", 00:12:46.663 "uuid": "004cb1d7-a882-5fd1-a3af-005508eebeca", 00:12:46.663 "is_configured": true, 00:12:46.663 "data_offset": 0, 00:12:46.663 "data_size": 65536 00:12:46.663 }, 00:12:46.663 { 00:12:46.663 "name": "BaseBdev4", 00:12:46.663 "uuid": "864a6d42-a753-555c-8397-19593242e870", 00:12:46.663 "is_configured": true, 00:12:46.663 "data_offset": 0, 00:12:46.663 "data_size": 65536 00:12:46.663 } 00:12:46.663 ] 00:12:46.663 }' 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.663 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.923 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:46.923 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.923 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.923 [2024-10-15 01:13:59.617330] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.923 [2024-10-15 01:13:59.617418] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:47.183 00:12:47.183 Latency(us) 00:12:47.183 [2024-10-15T01:13:59.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.183 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:47.183 raid_bdev1 : 7.91 83.43 250.28 0.00 0.00 16319.65 282.61 113557.58 00:12:47.183 [2024-10-15T01:13:59.907Z] =================================================================================================================== 00:12:47.183 [2024-10-15T01:13:59.907Z] Total : 83.43 250.28 0.00 0.00 16319.65 282.61 113557.58 
00:12:47.183 [2024-10-15 01:13:59.652666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.183 [2024-10-15 01:13:59.652745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:47.183 [2024-10-15 01:13:59.652863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:47.183 [2024-10-15 01:13:59.652915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:47.183 { 00:12:47.183 "results": [ 00:12:47.183 { 00:12:47.183 "job": "raid_bdev1", 00:12:47.183 "core_mask": "0x1", 00:12:47.183 "workload": "randrw", 00:12:47.183 "percentage": 50, 00:12:47.183 "status": "finished", 00:12:47.183 "queue_depth": 2, 00:12:47.183 "io_size": 3145728, 00:12:47.183 "runtime": 7.911293, 00:12:47.183 "iops": 83.42504821904586, 00:12:47.183 "mibps": 250.2751446571376, 00:12:47.183 "io_failed": 0, 00:12:47.183 "io_timeout": 0, 00:12:47.183 "avg_latency_us": 16319.652094746592, 00:12:47.183 "min_latency_us": 282.6061135371179, 00:12:47.183 "max_latency_us": 113557.57554585153 00:12:47.183 } 00:12:47.183 ], 00:12:47.183 "core_count": 1 00:12:47.183 } 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.183 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:47.184 /dev/nbd0 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- 
# grep -q -w nbd0 /proc/partitions 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.444 1+0 records in 00:12:47.444 1+0 records out 00:12:47.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232688 s, 17.6 MB/s 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:47.444 01:13:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.444 01:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:47.444 /dev/nbd1 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:47.704 01:14:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.704 1+0 records in 00:12:47.704 1+0 records out 00:12:47.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537239 s, 7.6 MB/s 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:47.704 01:14:00 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.704 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:47.964 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:47.964 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:47.964 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:47.964 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.964 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.964 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:47.964 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:47.964 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.964 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:47.964 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:47.964 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:47.964 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:47.965 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:47.965 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:47.965 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:47.965 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:47.965 01:14:00 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@12 -- # local i 00:12:47.965 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:47.965 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.965 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:47.965 /dev/nbd1 00:12:48.224 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:48.224 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:48.224 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:48.224 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:48.224 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:48.224 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:48.224 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:48.224 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:48.224 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:48.224 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:48.224 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.224 1+0 records in 00:12:48.224 1+0 records out 00:12:48.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512703 s, 8.0 MB/s 00:12:48.224 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.224 01:14:00 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@886 -- # size=4096 00:12:48.224 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.224 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:48.225 01:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:48.225 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.225 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:48.225 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:48.225 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:48.225 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:48.225 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:48.225 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:48.225 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:48.225 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.225 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:48.485 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:48.485 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:48.485 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:48.485 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.485 01:14:00 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.485 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:48.485 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:48.485 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.485 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:48.485 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:48.485 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:48.485 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:48.485 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:48.485 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.485 01:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:48.485 01:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:48.485 01:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:48.485 01:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:48.485 01:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.485 01:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.485 01:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:48.485 01:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:48.485 01:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.485 01:14:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:48.485 01:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89103 00:12:48.485 01:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89103 ']' 00:12:48.485 01:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89103 00:12:48.485 01:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:48.485 01:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:48.745 01:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89103 00:12:48.745 01:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:48.745 01:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:48.745 01:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89103' 00:12:48.745 killing process with pid 89103 00:12:48.745 01:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89103 00:12:48.745 Received shutdown signal, test time was about 9.509315 seconds 00:12:48.745 00:12:48.745 Latency(us) 00:12:48.745 [2024-10-15T01:14:01.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.745 [2024-10-15T01:14:01.469Z] =================================================================================================================== 00:12:48.745 [2024-10-15T01:14:01.469Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:48.745 [2024-10-15 01:14:01.244201] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:48.745 01:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89103 00:12:48.745 [2024-10-15 01:14:01.288774] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:12:49.005 01:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:49.005 00:12:49.005 real 0m11.405s 00:12:49.005 user 0m14.847s 00:12:49.005 sys 0m1.648s 00:12:49.005 01:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.005 01:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.005 ************************************ 00:12:49.005 END TEST raid_rebuild_test_io 00:12:49.005 ************************************ 00:12:49.005 01:14:01 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:12:49.005 01:14:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:49.005 01:14:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.005 01:14:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:49.005 ************************************ 00:12:49.005 START TEST raid_rebuild_test_sb_io 00:12:49.005 ************************************ 00:12:49.005 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:12:49.005 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:49.005 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:49.005 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:49.005 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:49.005 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:49.005 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:49.005 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.005 01:14:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:49.005 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.005 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.005 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:49.005 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 
00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89496 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:49.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89496 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 89496 ']' 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:49.006 01:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.006 [2024-10-15 01:14:01.656837] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:12:49.006 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:49.006 Zero copy mechanism will not be used. 
00:12:49.006 [2024-10-15 01:14:01.657005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89496 ] 00:12:49.265 [2024-10-15 01:14:01.800231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.265 [2024-10-15 01:14:01.827144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.265 [2024-10-15 01:14:01.869658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.265 [2024-10-15 01:14:01.869690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.833 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:49.833 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:12:49.833 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:49.833 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:49.833 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.833 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.833 BaseBdev1_malloc 00:12:49.833 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.833 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:49.833 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.833 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.833 [2024-10-15 01:14:02.492051] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:49.833 [2024-10-15 01:14:02.492109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.833 [2024-10-15 01:14:02.492137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:49.833 [2024-10-15 01:14:02.492150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.833 [2024-10-15 01:14:02.494216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.833 [2024-10-15 01:14:02.494249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:49.833 BaseBdev1 00:12:49.833 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.833 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:49.833 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:49.833 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.833 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.833 BaseBdev2_malloc 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.834 [2024-10-15 01:14:02.520407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:49.834 [2024-10-15 01:14:02.520451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:49.834 [2024-10-15 01:14:02.520471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:49.834 [2024-10-15 01:14:02.520479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.834 [2024-10-15 01:14:02.522482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.834 [2024-10-15 01:14:02.522523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:49.834 BaseBdev2 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.834 BaseBdev3_malloc 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.834 [2024-10-15 01:14:02.548895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:49.834 [2024-10-15 01:14:02.548943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.834 [2024-10-15 01:14:02.548991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:49.834 
[2024-10-15 01:14:02.549000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.834 [2024-10-15 01:14:02.550993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.834 [2024-10-15 01:14:02.551029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:49.834 BaseBdev3 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.834 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.094 BaseBdev4_malloc 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.094 [2024-10-15 01:14:02.594719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:50.094 [2024-10-15 01:14:02.594793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.094 [2024-10-15 01:14:02.594833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:50.094 [2024-10-15 01:14:02.594848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.094 [2024-10-15 01:14:02.598359] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.094 [2024-10-15 01:14:02.598413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:50.094 BaseBdev4 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.094 spare_malloc 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.094 spare_delay 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.094 [2024-10-15 01:14:02.635587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:50.094 [2024-10-15 01:14:02.635631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.094 [2024-10-15 01:14:02.635667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009c80 00:12:50.094 [2024-10-15 01:14:02.635676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.094 [2024-10-15 01:14:02.637796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.094 [2024-10-15 01:14:02.637830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:50.094 spare 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.094 [2024-10-15 01:14:02.647624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.094 [2024-10-15 01:14:02.649438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.094 [2024-10-15 01:14:02.649497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.094 [2024-10-15 01:14:02.649543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:50.094 [2024-10-15 01:14:02.649711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:50.094 [2024-10-15 01:14:02.649725] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:50.094 [2024-10-15 01:14:02.649978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:50.094 [2024-10-15 01:14:02.650109] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:50.094 [2024-10-15 01:14:02.650120] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:50.094 [2024-10-15 01:14:02.650244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.094 "name": "raid_bdev1", 00:12:50.094 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:12:50.094 "strip_size_kb": 0, 00:12:50.094 "state": "online", 00:12:50.094 "raid_level": "raid1", 00:12:50.094 "superblock": true, 00:12:50.094 "num_base_bdevs": 4, 00:12:50.094 "num_base_bdevs_discovered": 4, 00:12:50.094 "num_base_bdevs_operational": 4, 00:12:50.094 "base_bdevs_list": [ 00:12:50.094 { 00:12:50.094 "name": "BaseBdev1", 00:12:50.094 "uuid": "cfe18d17-b1bb-5186-a25a-8b354ae909fe", 00:12:50.094 "is_configured": true, 00:12:50.094 "data_offset": 2048, 00:12:50.094 "data_size": 63488 00:12:50.094 }, 00:12:50.094 { 00:12:50.094 "name": "BaseBdev2", 00:12:50.094 "uuid": "a323820f-b4e0-5379-82a5-9b04e011c4ea", 00:12:50.094 "is_configured": true, 00:12:50.094 "data_offset": 2048, 00:12:50.094 "data_size": 63488 00:12:50.094 }, 00:12:50.094 { 00:12:50.094 "name": "BaseBdev3", 00:12:50.094 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:12:50.094 "is_configured": true, 00:12:50.094 "data_offset": 2048, 00:12:50.094 "data_size": 63488 00:12:50.094 }, 00:12:50.094 { 00:12:50.094 "name": "BaseBdev4", 00:12:50.094 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:12:50.094 "is_configured": true, 00:12:50.094 "data_offset": 2048, 00:12:50.094 "data_size": 63488 00:12:50.094 } 00:12:50.094 ] 00:12:50.094 }' 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.094 01:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.354 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:50.354 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.354 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:50.354 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:50.614 [2024-10-15 01:14:03.083189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.614 [2024-10-15 01:14:03.178671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.614 01:14:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.614 "name": "raid_bdev1", 00:12:50.614 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:12:50.614 "strip_size_kb": 0, 00:12:50.614 "state": "online", 00:12:50.614 "raid_level": "raid1", 00:12:50.614 
"superblock": true, 00:12:50.614 "num_base_bdevs": 4, 00:12:50.614 "num_base_bdevs_discovered": 3, 00:12:50.614 "num_base_bdevs_operational": 3, 00:12:50.614 "base_bdevs_list": [ 00:12:50.614 { 00:12:50.614 "name": null, 00:12:50.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.614 "is_configured": false, 00:12:50.614 "data_offset": 0, 00:12:50.614 "data_size": 63488 00:12:50.614 }, 00:12:50.614 { 00:12:50.614 "name": "BaseBdev2", 00:12:50.614 "uuid": "a323820f-b4e0-5379-82a5-9b04e011c4ea", 00:12:50.614 "is_configured": true, 00:12:50.614 "data_offset": 2048, 00:12:50.614 "data_size": 63488 00:12:50.614 }, 00:12:50.614 { 00:12:50.614 "name": "BaseBdev3", 00:12:50.614 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:12:50.614 "is_configured": true, 00:12:50.614 "data_offset": 2048, 00:12:50.614 "data_size": 63488 00:12:50.614 }, 00:12:50.614 { 00:12:50.614 "name": "BaseBdev4", 00:12:50.614 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:12:50.614 "is_configured": true, 00:12:50.614 "data_offset": 2048, 00:12:50.614 "data_size": 63488 00:12:50.614 } 00:12:50.614 ] 00:12:50.614 }' 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.614 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.614 [2024-10-15 01:14:03.264564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:50.614 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:50.614 Zero copy mechanism will not be used. 00:12:50.614 Running I/O for 60 seconds... 
00:12:51.183 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:51.183 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.183 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.183 [2024-10-15 01:14:03.608571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.183 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.183 01:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:51.183 [2024-10-15 01:14:03.650865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:12:51.183 [2024-10-15 01:14:03.652891] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:51.183 [2024-10-15 01:14:03.765650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:51.183 [2024-10-15 01:14:03.766960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:51.442 [2024-10-15 01:14:03.996788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:51.442 [2024-10-15 01:14:03.997179] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:51.702 [2024-10-15 01:14:04.253511] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:51.962 146.00 IOPS, 438.00 MiB/s [2024-10-15T01:14:04.686Z] [2024-10-15 01:14:04.491940] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:51.962 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.962 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.962 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.962 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.962 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.962 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.962 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.962 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.962 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.962 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.962 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.962 "name": "raid_bdev1", 00:12:51.962 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:12:51.962 "strip_size_kb": 0, 00:12:51.962 "state": "online", 00:12:51.962 "raid_level": "raid1", 00:12:51.962 "superblock": true, 00:12:51.962 "num_base_bdevs": 4, 00:12:51.962 "num_base_bdevs_discovered": 4, 00:12:51.962 "num_base_bdevs_operational": 4, 00:12:51.962 "process": { 00:12:51.962 "type": "rebuild", 00:12:51.962 "target": "spare", 00:12:51.962 "progress": { 00:12:51.962 "blocks": 12288, 00:12:51.962 "percent": 19 00:12:51.962 } 00:12:51.962 }, 00:12:51.962 "base_bdevs_list": [ 00:12:51.962 { 00:12:51.962 "name": "spare", 00:12:51.962 "uuid": "1a11810c-2325-549c-b26a-3104b2696020", 00:12:51.962 "is_configured": true, 00:12:51.962 "data_offset": 2048, 00:12:51.962 "data_size": 63488 
00:12:51.962 }, 00:12:51.962 { 00:12:51.962 "name": "BaseBdev2", 00:12:51.962 "uuid": "a323820f-b4e0-5379-82a5-9b04e011c4ea", 00:12:51.962 "is_configured": true, 00:12:51.962 "data_offset": 2048, 00:12:51.962 "data_size": 63488 00:12:51.962 }, 00:12:51.962 { 00:12:51.962 "name": "BaseBdev3", 00:12:51.962 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:12:51.962 "is_configured": true, 00:12:51.962 "data_offset": 2048, 00:12:51.962 "data_size": 63488 00:12:51.962 }, 00:12:51.962 { 00:12:51.962 "name": "BaseBdev4", 00:12:51.962 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:12:51.962 "is_configured": true, 00:12:51.962 "data_offset": 2048, 00:12:51.962 "data_size": 63488 00:12:51.962 } 00:12:51.962 ] 00:12:51.962 }' 00:12:52.222 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.222 [2024-10-15 01:14:04.728802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:52.222 [2024-10-15 01:14:04.729362] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:52.222 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.222 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.222 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.222 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:52.222 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.222 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.222 [2024-10-15 01:14:04.770614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.222 [2024-10-15 
01:14:04.831375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:52.222 [2024-10-15 01:14:04.832515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:52.222 [2024-10-15 01:14:04.933833] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:52.222 [2024-10-15 01:14:04.942682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.222 [2024-10-15 01:14:04.942727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.222 [2024-10-15 01:14:04.942739] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:52.483 [2024-10-15 01:14:04.973221] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:12:52.483 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.483 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:52.483 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.483 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.483 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.483 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.483 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.483 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.483 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:12:52.483 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.483 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.483 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.483 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.483 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.483 01:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.483 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.483 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.483 "name": "raid_bdev1", 00:12:52.483 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:12:52.483 "strip_size_kb": 0, 00:12:52.483 "state": "online", 00:12:52.483 "raid_level": "raid1", 00:12:52.483 "superblock": true, 00:12:52.483 "num_base_bdevs": 4, 00:12:52.483 "num_base_bdevs_discovered": 3, 00:12:52.483 "num_base_bdevs_operational": 3, 00:12:52.483 "base_bdevs_list": [ 00:12:52.483 { 00:12:52.483 "name": null, 00:12:52.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.483 "is_configured": false, 00:12:52.483 "data_offset": 0, 00:12:52.483 "data_size": 63488 00:12:52.483 }, 00:12:52.483 { 00:12:52.483 "name": "BaseBdev2", 00:12:52.483 "uuid": "a323820f-b4e0-5379-82a5-9b04e011c4ea", 00:12:52.483 "is_configured": true, 00:12:52.483 "data_offset": 2048, 00:12:52.483 "data_size": 63488 00:12:52.483 }, 00:12:52.483 { 00:12:52.483 "name": "BaseBdev3", 00:12:52.483 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:12:52.483 "is_configured": true, 00:12:52.483 "data_offset": 2048, 00:12:52.483 "data_size": 63488 00:12:52.483 }, 00:12:52.483 { 00:12:52.483 "name": "BaseBdev4", 
00:12:52.483 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:12:52.483 "is_configured": true, 00:12:52.483 "data_offset": 2048, 00:12:52.483 "data_size": 63488 00:12:52.483 } 00:12:52.483 ] 00:12:52.483 }' 00:12:52.483 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.483 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.743 141.50 IOPS, 424.50 MiB/s [2024-10-15T01:14:05.467Z] 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.743 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.743 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.743 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.743 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.743 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.743 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.743 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.743 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.004 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.004 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.004 "name": "raid_bdev1", 00:12:53.004 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:12:53.004 "strip_size_kb": 0, 00:12:53.004 "state": "online", 00:12:53.004 "raid_level": "raid1", 00:12:53.004 "superblock": true, 00:12:53.004 "num_base_bdevs": 4, 00:12:53.004 
"num_base_bdevs_discovered": 3, 00:12:53.004 "num_base_bdevs_operational": 3, 00:12:53.004 "base_bdevs_list": [ 00:12:53.004 { 00:12:53.004 "name": null, 00:12:53.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.004 "is_configured": false, 00:12:53.004 "data_offset": 0, 00:12:53.004 "data_size": 63488 00:12:53.004 }, 00:12:53.004 { 00:12:53.004 "name": "BaseBdev2", 00:12:53.004 "uuid": "a323820f-b4e0-5379-82a5-9b04e011c4ea", 00:12:53.004 "is_configured": true, 00:12:53.004 "data_offset": 2048, 00:12:53.004 "data_size": 63488 00:12:53.004 }, 00:12:53.004 { 00:12:53.004 "name": "BaseBdev3", 00:12:53.004 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:12:53.004 "is_configured": true, 00:12:53.004 "data_offset": 2048, 00:12:53.004 "data_size": 63488 00:12:53.004 }, 00:12:53.004 { 00:12:53.004 "name": "BaseBdev4", 00:12:53.004 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:12:53.004 "is_configured": true, 00:12:53.004 "data_offset": 2048, 00:12:53.004 "data_size": 63488 00:12:53.004 } 00:12:53.004 ] 00:12:53.004 }' 00:12:53.004 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.004 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:53.004 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.004 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:53.004 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:53.004 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.004 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.004 [2024-10-15 01:14:05.570493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.004 01:14:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.004 01:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:53.004 [2024-10-15 01:14:05.611764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:12:53.004 [2024-10-15 01:14:05.613728] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:53.264 [2024-10-15 01:14:05.734198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:53.264 [2024-10-15 01:14:05.735529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:53.264 [2024-10-15 01:14:05.953006] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:53.264 [2024-10-15 01:14:05.953410] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:53.833 162.00 IOPS, 486.00 MiB/s [2024-10-15T01:14:06.557Z] [2024-10-15 01:14:06.293135] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:53.833 [2024-10-15 01:14:06.408717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.093 "name": "raid_bdev1", 00:12:54.093 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:12:54.093 "strip_size_kb": 0, 00:12:54.093 "state": "online", 00:12:54.093 "raid_level": "raid1", 00:12:54.093 "superblock": true, 00:12:54.093 "num_base_bdevs": 4, 00:12:54.093 "num_base_bdevs_discovered": 4, 00:12:54.093 "num_base_bdevs_operational": 4, 00:12:54.093 "process": { 00:12:54.093 "type": "rebuild", 00:12:54.093 "target": "spare", 00:12:54.093 "progress": { 00:12:54.093 "blocks": 12288, 00:12:54.093 "percent": 19 00:12:54.093 } 00:12:54.093 }, 00:12:54.093 "base_bdevs_list": [ 00:12:54.093 { 00:12:54.093 "name": "spare", 00:12:54.093 "uuid": "1a11810c-2325-549c-b26a-3104b2696020", 00:12:54.093 "is_configured": true, 00:12:54.093 "data_offset": 2048, 00:12:54.093 "data_size": 63488 00:12:54.093 }, 00:12:54.093 { 00:12:54.093 "name": "BaseBdev2", 00:12:54.093 "uuid": "a323820f-b4e0-5379-82a5-9b04e011c4ea", 00:12:54.093 "is_configured": true, 00:12:54.093 "data_offset": 2048, 00:12:54.093 "data_size": 63488 00:12:54.093 }, 00:12:54.093 { 00:12:54.093 "name": "BaseBdev3", 00:12:54.093 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:12:54.093 "is_configured": true, 00:12:54.093 "data_offset": 2048, 00:12:54.093 "data_size": 63488 00:12:54.093 }, 
00:12:54.093 { 00:12:54.093 "name": "BaseBdev4", 00:12:54.093 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:12:54.093 "is_configured": true, 00:12:54.093 "data_offset": 2048, 00:12:54.093 "data_size": 63488 00:12:54.093 } 00:12:54.093 ] 00:12:54.093 }' 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:54.093 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.093 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.093 [2024-10-15 01:14:06.730592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:54.093 [2024-10-15 01:14:06.741268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:54.093 
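The captured stderr line `bdev_raid.sh: line 666: [: =: unary operator expected` is the classic single-bracket failure: an unquoted variable that expands to nothing leaves `[` with too few operands, so `'[' $var = false ']'` degenerates to `'[' = false ']'`. A minimal reproduction of the failure mode and the quoting that avoids it (the `flag` variable here is a hypothetical stand-in, not a name from the test script):

```shell
#!/usr/bin/env bash
flag=""                        # empty, as when an optional argument is unset
# Quoting keeps the left operand present even when empty: [ "" = false ]
if [ "$flag" = false ]; then
  result="flag is false"
else
  result="flag is empty or true"
fi
echo "$result"
# The unquoted form '[ $flag = false ]' would expand to '[ = false ]' and
# fail with "unary operator expected"; bash's [[ $flag = false ]] also
# avoids the problem because [[ ]] does not word-split its operands.
```

In the log this error is non-fatal: the `[` builtin returns status 2, the `if` takes the false branch, and the test proceeds to the `num_base_bdevs_operational=4` path on the next traced line.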
[2024-10-15 01:14:06.741825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:54.353 [2024-10-15 01:14:06.948617] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:12:54.353 [2024-10-15 01:14:06.948651] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:12:54.353 [2024-10-15 01:14:06.949822] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:54.353 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.353 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:54.353 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:54.353 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.353 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.353 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.354 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.354 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.354 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.354 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.354 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.354 01:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.354 01:14:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.354 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.354 "name": "raid_bdev1", 00:12:54.354 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:12:54.354 "strip_size_kb": 0, 00:12:54.354 "state": "online", 00:12:54.354 "raid_level": "raid1", 00:12:54.354 "superblock": true, 00:12:54.354 "num_base_bdevs": 4, 00:12:54.354 "num_base_bdevs_discovered": 3, 00:12:54.354 "num_base_bdevs_operational": 3, 00:12:54.354 "process": { 00:12:54.354 "type": "rebuild", 00:12:54.354 "target": "spare", 00:12:54.354 "progress": { 00:12:54.354 "blocks": 16384, 00:12:54.354 "percent": 25 00:12:54.354 } 00:12:54.354 }, 00:12:54.354 "base_bdevs_list": [ 00:12:54.354 { 00:12:54.354 "name": "spare", 00:12:54.354 "uuid": "1a11810c-2325-549c-b26a-3104b2696020", 00:12:54.354 "is_configured": true, 00:12:54.354 "data_offset": 2048, 00:12:54.354 "data_size": 63488 00:12:54.354 }, 00:12:54.354 { 00:12:54.354 "name": null, 00:12:54.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.354 "is_configured": false, 00:12:54.354 "data_offset": 0, 00:12:54.354 "data_size": 63488 00:12:54.354 }, 00:12:54.354 { 00:12:54.354 "name": "BaseBdev3", 00:12:54.354 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:12:54.354 "is_configured": true, 00:12:54.354 "data_offset": 2048, 00:12:54.354 "data_size": 63488 00:12:54.354 }, 00:12:54.354 { 00:12:54.354 "name": "BaseBdev4", 00:12:54.354 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:12:54.354 "is_configured": true, 00:12:54.354 "data_offset": 2048, 00:12:54.354 "data_size": 63488 00:12:54.354 } 00:12:54.354 ] 00:12:54.354 }' 00:12:54.354 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.354 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.354 01:14:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.354 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.354 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=399 00:12:54.354 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:54.354 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.354 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.354 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.354 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.354 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.354 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.354 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.354 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.354 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.614 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.614 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.614 "name": "raid_bdev1", 00:12:54.614 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:12:54.614 "strip_size_kb": 0, 00:12:54.614 "state": "online", 00:12:54.614 "raid_level": "raid1", 00:12:54.614 "superblock": true, 00:12:54.614 "num_base_bdevs": 4, 00:12:54.614 "num_base_bdevs_discovered": 3, 00:12:54.614 
"num_base_bdevs_operational": 3, 00:12:54.614 "process": { 00:12:54.614 "type": "rebuild", 00:12:54.614 "target": "spare", 00:12:54.614 "progress": { 00:12:54.614 "blocks": 16384, 00:12:54.614 "percent": 25 00:12:54.614 } 00:12:54.614 }, 00:12:54.614 "base_bdevs_list": [ 00:12:54.614 { 00:12:54.614 "name": "spare", 00:12:54.614 "uuid": "1a11810c-2325-549c-b26a-3104b2696020", 00:12:54.614 "is_configured": true, 00:12:54.614 "data_offset": 2048, 00:12:54.614 "data_size": 63488 00:12:54.614 }, 00:12:54.614 { 00:12:54.614 "name": null, 00:12:54.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.614 "is_configured": false, 00:12:54.614 "data_offset": 0, 00:12:54.614 "data_size": 63488 00:12:54.614 }, 00:12:54.614 { 00:12:54.614 "name": "BaseBdev3", 00:12:54.614 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:12:54.614 "is_configured": true, 00:12:54.614 "data_offset": 2048, 00:12:54.614 "data_size": 63488 00:12:54.614 }, 00:12:54.614 { 00:12:54.614 "name": "BaseBdev4", 00:12:54.614 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:12:54.614 "is_configured": true, 00:12:54.614 "data_offset": 2048, 00:12:54.614 "data_size": 63488 00:12:54.614 } 00:12:54.614 ] 00:12:54.614 }' 00:12:54.614 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.614 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.614 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.614 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.614 01:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:54.874 143.00 IOPS, 429.00 MiB/s [2024-10-15T01:14:07.598Z] [2024-10-15 01:14:07.375047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:54.874 
[2024-10-15 01:14:07.375334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:55.134 [2024-10-15 01:14:07.697021] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:55.709 [2024-10-15 01:14:08.137084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.709 128.20 IOPS, 384.60 MiB/s [2024-10-15T01:14:08.433Z] 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.709 "name": "raid_bdev1", 00:12:55.709 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:12:55.709 
"strip_size_kb": 0, 00:12:55.709 "state": "online", 00:12:55.709 "raid_level": "raid1", 00:12:55.709 "superblock": true, 00:12:55.709 "num_base_bdevs": 4, 00:12:55.709 "num_base_bdevs_discovered": 3, 00:12:55.709 "num_base_bdevs_operational": 3, 00:12:55.709 "process": { 00:12:55.709 "type": "rebuild", 00:12:55.709 "target": "spare", 00:12:55.709 "progress": { 00:12:55.709 "blocks": 32768, 00:12:55.709 "percent": 51 00:12:55.709 } 00:12:55.709 }, 00:12:55.709 "base_bdevs_list": [ 00:12:55.709 { 00:12:55.709 "name": "spare", 00:12:55.709 "uuid": "1a11810c-2325-549c-b26a-3104b2696020", 00:12:55.709 "is_configured": true, 00:12:55.709 "data_offset": 2048, 00:12:55.709 "data_size": 63488 00:12:55.709 }, 00:12:55.709 { 00:12:55.709 "name": null, 00:12:55.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.709 "is_configured": false, 00:12:55.709 "data_offset": 0, 00:12:55.709 "data_size": 63488 00:12:55.709 }, 00:12:55.709 { 00:12:55.709 "name": "BaseBdev3", 00:12:55.709 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:12:55.709 "is_configured": true, 00:12:55.709 "data_offset": 2048, 00:12:55.709 "data_size": 63488 00:12:55.709 }, 00:12:55.709 { 00:12:55.709 "name": "BaseBdev4", 00:12:55.709 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:12:55.709 "is_configured": true, 00:12:55.709 "data_offset": 2048, 00:12:55.709 "data_size": 63488 00:12:55.709 } 00:12:55.709 ] 00:12:55.709 }' 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.709 01:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:55.973 
[2024-10-15 01:14:08.471262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:56.803 114.67 IOPS, 344.00 MiB/s [2024-10-15T01:14:09.527Z] 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:56.803 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.803 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.803 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.803 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.803 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.803 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.803 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.803 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.803 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.803 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.803 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.803 "name": "raid_bdev1", 00:12:56.803 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:12:56.803 "strip_size_kb": 0, 00:12:56.803 "state": "online", 00:12:56.803 "raid_level": "raid1", 00:12:56.803 "superblock": true, 00:12:56.803 "num_base_bdevs": 4, 00:12:56.803 "num_base_bdevs_discovered": 3, 00:12:56.803 "num_base_bdevs_operational": 3, 00:12:56.803 "process": { 00:12:56.803 "type": "rebuild", 00:12:56.803 "target": 
"spare", 00:12:56.803 "progress": { 00:12:56.803 "blocks": 53248, 00:12:56.803 "percent": 83 00:12:56.803 } 00:12:56.803 }, 00:12:56.803 "base_bdevs_list": [ 00:12:56.803 { 00:12:56.803 "name": "spare", 00:12:56.803 "uuid": "1a11810c-2325-549c-b26a-3104b2696020", 00:12:56.803 "is_configured": true, 00:12:56.803 "data_offset": 2048, 00:12:56.803 "data_size": 63488 00:12:56.803 }, 00:12:56.803 { 00:12:56.803 "name": null, 00:12:56.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.803 "is_configured": false, 00:12:56.803 "data_offset": 0, 00:12:56.803 "data_size": 63488 00:12:56.803 }, 00:12:56.803 { 00:12:56.803 "name": "BaseBdev3", 00:12:56.803 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:12:56.803 "is_configured": true, 00:12:56.803 "data_offset": 2048, 00:12:56.803 "data_size": 63488 00:12:56.803 }, 00:12:56.803 { 00:12:56.803 "name": "BaseBdev4", 00:12:56.803 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:12:56.803 "is_configured": true, 00:12:56.803 "data_offset": 2048, 00:12:56.803 "data_size": 63488 00:12:56.803 } 00:12:56.804 ] 00:12:56.804 }' 00:12:56.804 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.804 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.804 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.804 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.804 01:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:57.063 [2024-10-15 01:14:09.570156] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:57.063 [2024-10-15 01:14:09.674892] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:57.323 
[2024-10-15 01:14:09.902985] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:57.323 [2024-10-15 01:14:10.007760] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:57.323 [2024-10-15 01:14:10.009953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.841 102.71 IOPS, 308.14 MiB/s [2024-10-15T01:14:10.565Z] 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:57.841 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.841 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.841 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.841 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.841 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.841 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.841 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.841 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.841 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.841 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.841 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.841 "name": "raid_bdev1", 00:12:57.841 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:12:57.841 "strip_size_kb": 0, 00:12:57.841 "state": "online", 00:12:57.841 "raid_level": "raid1", 00:12:57.841 
"superblock": true, 00:12:57.841 "num_base_bdevs": 4, 00:12:57.841 "num_base_bdevs_discovered": 3, 00:12:57.841 "num_base_bdevs_operational": 3, 00:12:57.841 "base_bdevs_list": [ 00:12:57.841 { 00:12:57.841 "name": "spare", 00:12:57.841 "uuid": "1a11810c-2325-549c-b26a-3104b2696020", 00:12:57.841 "is_configured": true, 00:12:57.841 "data_offset": 2048, 00:12:57.841 "data_size": 63488 00:12:57.841 }, 00:12:57.841 { 00:12:57.841 "name": null, 00:12:57.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.841 "is_configured": false, 00:12:57.841 "data_offset": 0, 00:12:57.841 "data_size": 63488 00:12:57.841 }, 00:12:57.841 { 00:12:57.841 "name": "BaseBdev3", 00:12:57.841 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:12:57.841 "is_configured": true, 00:12:57.841 "data_offset": 2048, 00:12:57.841 "data_size": 63488 00:12:57.841 }, 00:12:57.841 { 00:12:57.841 "name": "BaseBdev4", 00:12:57.841 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:12:57.841 "is_configured": true, 00:12:57.841 "data_offset": 2048, 00:12:57.841 "data_size": 63488 00:12:57.841 } 00:12:57.841 ] 00:12:57.841 }' 00:12:57.841 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=none 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.099 "name": "raid_bdev1", 00:12:58.099 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:12:58.099 "strip_size_kb": 0, 00:12:58.099 "state": "online", 00:12:58.099 "raid_level": "raid1", 00:12:58.099 "superblock": true, 00:12:58.099 "num_base_bdevs": 4, 00:12:58.099 "num_base_bdevs_discovered": 3, 00:12:58.099 "num_base_bdevs_operational": 3, 00:12:58.099 "base_bdevs_list": [ 00:12:58.099 { 00:12:58.099 "name": "spare", 00:12:58.099 "uuid": "1a11810c-2325-549c-b26a-3104b2696020", 00:12:58.099 "is_configured": true, 00:12:58.099 "data_offset": 2048, 00:12:58.099 "data_size": 63488 00:12:58.099 }, 00:12:58.099 { 00:12:58.099 "name": null, 00:12:58.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.099 "is_configured": false, 00:12:58.099 "data_offset": 0, 00:12:58.099 "data_size": 63488 00:12:58.099 }, 00:12:58.099 { 00:12:58.099 "name": "BaseBdev3", 00:12:58.099 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:12:58.099 "is_configured": true, 00:12:58.099 "data_offset": 2048, 00:12:58.099 "data_size": 63488 00:12:58.099 }, 00:12:58.099 { 00:12:58.099 "name": 
"BaseBdev4", 00:12:58.099 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:12:58.099 "is_configured": true, 00:12:58.099 "data_offset": 2048, 00:12:58.099 "data_size": 63488 00:12:58.099 } 00:12:58.099 ] 00:12:58.099 }' 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:58.099 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.100 "name": "raid_bdev1", 00:12:58.100 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:12:58.100 "strip_size_kb": 0, 00:12:58.100 "state": "online", 00:12:58.100 "raid_level": "raid1", 00:12:58.100 "superblock": true, 00:12:58.100 "num_base_bdevs": 4, 00:12:58.100 "num_base_bdevs_discovered": 3, 00:12:58.100 "num_base_bdevs_operational": 3, 00:12:58.100 "base_bdevs_list": [ 00:12:58.100 { 00:12:58.100 "name": "spare", 00:12:58.100 "uuid": "1a11810c-2325-549c-b26a-3104b2696020", 00:12:58.100 "is_configured": true, 00:12:58.100 "data_offset": 2048, 00:12:58.100 "data_size": 63488 00:12:58.100 }, 00:12:58.100 { 00:12:58.100 "name": null, 00:12:58.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.100 "is_configured": false, 00:12:58.100 "data_offset": 0, 00:12:58.100 "data_size": 63488 00:12:58.100 }, 00:12:58.100 { 00:12:58.100 "name": "BaseBdev3", 00:12:58.100 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:12:58.100 "is_configured": true, 00:12:58.100 "data_offset": 2048, 00:12:58.100 "data_size": 63488 00:12:58.100 }, 00:12:58.100 { 00:12:58.100 "name": "BaseBdev4", 00:12:58.100 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:12:58.100 "is_configured": true, 00:12:58.100 "data_offset": 2048, 00:12:58.100 "data_size": 63488 00:12:58.100 } 00:12:58.100 ] 00:12:58.100 }' 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.100 01:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.669 [2024-10-15 01:14:11.183298] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.669 [2024-10-15 01:14:11.183329] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.669 00:12:58.669 Latency(us) 00:12:58.669 [2024-10-15T01:14:11.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.669 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:58.669 raid_bdev1 : 7.94 96.30 288.90 0.00 0.00 14219.72 271.87 118136.51 00:12:58.669 [2024-10-15T01:14:11.393Z] =================================================================================================================== 00:12:58.669 [2024-10-15T01:14:11.393Z] Total : 96.30 288.90 0.00 0.00 14219.72 271.87 118136.51 00:12:58.669 { 00:12:58.669 "results": [ 00:12:58.669 { 00:12:58.669 "job": "raid_bdev1", 00:12:58.669 "core_mask": "0x1", 00:12:58.669 "workload": "randrw", 00:12:58.669 "percentage": 50, 00:12:58.669 "status": "finished", 00:12:58.669 "queue_depth": 2, 00:12:58.669 "io_size": 3145728, 00:12:58.669 "runtime": 7.943871, 00:12:58.669 "iops": 96.30065745025315, 00:12:58.669 "mibps": 288.9019723507595, 00:12:58.669 "io_failed": 0, 00:12:58.669 "io_timeout": 0, 00:12:58.669 "avg_latency_us": 14219.723962668037, 00:12:58.669 "min_latency_us": 271.87423580786026, 00:12:58.669 "max_latency_us": 118136.51004366812 00:12:58.669 } 00:12:58.669 ], 00:12:58.669 "core_count": 1 00:12:58.669 } 00:12:58.669 [2024-10-15 01:14:11.198515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.669 
[2024-10-15 01:14:11.198552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.669 [2024-10-15 01:14:11.198667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.669 [2024-10-15 01:14:11.198676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.669 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:58.930 /dev/nbd0 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.930 1+0 records in 00:12:58.930 1+0 records out 00:12:58.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343597 s, 11.9 MB/s 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.930 
01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.930 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:59.190 /dev/nbd1 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.190 1+0 records in 00:12:59.190 1+0 records out 00:12:59.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030927 s, 13.2 MB/s 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.190 01:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:59.450 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:59.710 /dev/nbd1 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.710 1+0 records in 00:12:59.710 1+0 records out 00:12:59.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619505 s, 6.6 MB/s 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.710 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:59.970 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:59.970 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:59.970 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:59.970 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.970 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.970 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:59.970 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:59.970 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.970 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:59.970 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.970 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:59.970 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.970 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:59.970 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.970 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.230 [2024-10-15 01:14:12.752306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:00.230 [2024-10-15 01:14:12.752398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.230 [2024-10-15 01:14:12.752454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:00.230 [2024-10-15 01:14:12.752503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.230 [2024-10-15 01:14:12.754675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.230 [2024-10-15 01:14:12.754745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:00.230 [2024-10-15 01:14:12.754870] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:00.230 [2024-10-15 01:14:12.754947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.230 [2024-10-15 01:14:12.755122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:00.230 [2024-10-15 01:14:12.755267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:00.230 spare 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:00.230 01:14:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.230 [2024-10-15 01:14:12.855194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:13:00.230 [2024-10-15 01:14:12.855251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:00.230 [2024-10-15 01:14:12.855570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000337b0 00:13:00.230 [2024-10-15 01:14:12.855735] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:13:00.230 [2024-10-15 01:14:12.855790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:13:00.230 [2024-10-15 01:14:12.855950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.230 "name": "raid_bdev1", 00:13:00.230 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:13:00.230 "strip_size_kb": 0, 00:13:00.230 "state": "online", 00:13:00.230 "raid_level": "raid1", 00:13:00.230 "superblock": true, 00:13:00.230 "num_base_bdevs": 4, 00:13:00.230 "num_base_bdevs_discovered": 3, 00:13:00.230 "num_base_bdevs_operational": 3, 00:13:00.230 "base_bdevs_list": [ 00:13:00.230 { 00:13:00.230 "name": "spare", 00:13:00.230 "uuid": "1a11810c-2325-549c-b26a-3104b2696020", 00:13:00.230 "is_configured": true, 00:13:00.230 "data_offset": 2048, 00:13:00.230 "data_size": 63488 00:13:00.230 }, 00:13:00.230 { 00:13:00.230 "name": null, 00:13:00.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.230 "is_configured": false, 00:13:00.230 "data_offset": 2048, 00:13:00.230 "data_size": 63488 00:13:00.230 }, 00:13:00.230 { 00:13:00.230 "name": "BaseBdev3", 00:13:00.230 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:13:00.230 "is_configured": true, 00:13:00.230 "data_offset": 2048, 00:13:00.230 "data_size": 63488 00:13:00.230 }, 00:13:00.230 { 00:13:00.230 "name": "BaseBdev4", 
00:13:00.230 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:13:00.230 "is_configured": true, 00:13:00.230 "data_offset": 2048, 00:13:00.230 "data_size": 63488 00:13:00.230 } 00:13:00.230 ] 00:13:00.230 }' 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.230 01:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.800 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:00.800 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.800 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:00.800 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:00.800 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.800 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.800 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.800 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.800 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.800 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.800 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.800 "name": "raid_bdev1", 00:13:00.800 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:13:00.800 "strip_size_kb": 0, 00:13:00.800 "state": "online", 00:13:00.800 "raid_level": "raid1", 00:13:00.800 "superblock": true, 00:13:00.800 "num_base_bdevs": 4, 00:13:00.800 "num_base_bdevs_discovered": 3, 00:13:00.800 
"num_base_bdevs_operational": 3, 00:13:00.800 "base_bdevs_list": [ 00:13:00.800 { 00:13:00.800 "name": "spare", 00:13:00.800 "uuid": "1a11810c-2325-549c-b26a-3104b2696020", 00:13:00.800 "is_configured": true, 00:13:00.800 "data_offset": 2048, 00:13:00.800 "data_size": 63488 00:13:00.800 }, 00:13:00.800 { 00:13:00.800 "name": null, 00:13:00.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.800 "is_configured": false, 00:13:00.800 "data_offset": 2048, 00:13:00.800 "data_size": 63488 00:13:00.800 }, 00:13:00.800 { 00:13:00.800 "name": "BaseBdev3", 00:13:00.800 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:13:00.800 "is_configured": true, 00:13:00.800 "data_offset": 2048, 00:13:00.800 "data_size": 63488 00:13:00.800 }, 00:13:00.800 { 00:13:00.800 "name": "BaseBdev4", 00:13:00.800 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:13:00.800 "is_configured": true, 00:13:00.800 "data_offset": 2048, 00:13:00.800 "data_size": 63488 00:13:00.800 } 00:13:00.800 ] 00:13:00.800 }' 00:13:00.800 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.800 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:00.800 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.801 [2024-10-15 01:14:13.399451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.801 "name": "raid_bdev1", 00:13:00.801 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:13:00.801 "strip_size_kb": 0, 00:13:00.801 "state": "online", 00:13:00.801 "raid_level": "raid1", 00:13:00.801 "superblock": true, 00:13:00.801 "num_base_bdevs": 4, 00:13:00.801 "num_base_bdevs_discovered": 2, 00:13:00.801 "num_base_bdevs_operational": 2, 00:13:00.801 "base_bdevs_list": [ 00:13:00.801 { 00:13:00.801 "name": null, 00:13:00.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.801 "is_configured": false, 00:13:00.801 "data_offset": 0, 00:13:00.801 "data_size": 63488 00:13:00.801 }, 00:13:00.801 { 00:13:00.801 "name": null, 00:13:00.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.801 "is_configured": false, 00:13:00.801 "data_offset": 2048, 00:13:00.801 "data_size": 63488 00:13:00.801 }, 00:13:00.801 { 00:13:00.801 "name": "BaseBdev3", 00:13:00.801 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:13:00.801 "is_configured": true, 00:13:00.801 "data_offset": 2048, 00:13:00.801 "data_size": 63488 00:13:00.801 }, 00:13:00.801 { 00:13:00.801 "name": "BaseBdev4", 00:13:00.801 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:13:00.801 "is_configured": true, 00:13:00.801 "data_offset": 2048, 00:13:00.801 "data_size": 63488 00:13:00.801 } 00:13:00.801 ] 00:13:00.801 }' 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.801 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:13:01.371 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:01.371 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.371 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.371 [2024-10-15 01:14:13.807026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.371 [2024-10-15 01:14:13.807275] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:01.371 [2024-10-15 01:14:13.807336] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:01.371 [2024-10-15 01:14:13.807438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.371 [2024-10-15 01:14:13.811900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033880 00:13:01.371 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.371 01:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:01.371 [2024-10-15 01:14:13.813747] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.310 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.310 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.310 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.311 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.311 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.311 01:14:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.311 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.311 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.311 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.311 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.311 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.311 "name": "raid_bdev1", 00:13:02.311 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:13:02.311 "strip_size_kb": 0, 00:13:02.311 "state": "online", 00:13:02.311 "raid_level": "raid1", 00:13:02.311 "superblock": true, 00:13:02.311 "num_base_bdevs": 4, 00:13:02.311 "num_base_bdevs_discovered": 3, 00:13:02.311 "num_base_bdevs_operational": 3, 00:13:02.311 "process": { 00:13:02.311 "type": "rebuild", 00:13:02.311 "target": "spare", 00:13:02.311 "progress": { 00:13:02.311 "blocks": 20480, 00:13:02.311 "percent": 32 00:13:02.311 } 00:13:02.311 }, 00:13:02.311 "base_bdevs_list": [ 00:13:02.311 { 00:13:02.311 "name": "spare", 00:13:02.311 "uuid": "1a11810c-2325-549c-b26a-3104b2696020", 00:13:02.311 "is_configured": true, 00:13:02.311 "data_offset": 2048, 00:13:02.311 "data_size": 63488 00:13:02.311 }, 00:13:02.311 { 00:13:02.311 "name": null, 00:13:02.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.311 "is_configured": false, 00:13:02.311 "data_offset": 2048, 00:13:02.311 "data_size": 63488 00:13:02.311 }, 00:13:02.311 { 00:13:02.311 "name": "BaseBdev3", 00:13:02.311 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:13:02.311 "is_configured": true, 00:13:02.311 "data_offset": 2048, 00:13:02.311 "data_size": 63488 00:13:02.311 }, 00:13:02.311 { 00:13:02.311 "name": "BaseBdev4", 00:13:02.311 "uuid": 
"e51501e4-c034-5abb-aeff-e75324939ead", 00:13:02.311 "is_configured": true, 00:13:02.311 "data_offset": 2048, 00:13:02.311 "data_size": 63488 00:13:02.311 } 00:13:02.311 ] 00:13:02.311 }' 00:13:02.311 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.311 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.311 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.311 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.311 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:02.311 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.311 01:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.311 [2024-10-15 01:14:14.966012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.311 [2024-10-15 01:14:15.017848] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:02.311 [2024-10-15 01:14:15.017902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.311 [2024-10-15 01:14:15.017921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.311 [2024-10-15 01:14:15.017928] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:02.311 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.311 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:02.311 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.311 01:14:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.311 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.311 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.311 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:02.311 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.311 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.311 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.311 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.571 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.571 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.571 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.571 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.571 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.571 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.571 "name": "raid_bdev1", 00:13:02.571 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:13:02.571 "strip_size_kb": 0, 00:13:02.571 "state": "online", 00:13:02.571 "raid_level": "raid1", 00:13:02.571 "superblock": true, 00:13:02.571 "num_base_bdevs": 4, 00:13:02.571 "num_base_bdevs_discovered": 2, 00:13:02.571 "num_base_bdevs_operational": 2, 00:13:02.571 "base_bdevs_list": [ 00:13:02.571 { 00:13:02.571 "name": null, 00:13:02.571 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:02.571 "is_configured": false, 00:13:02.571 "data_offset": 0, 00:13:02.571 "data_size": 63488 00:13:02.571 }, 00:13:02.571 { 00:13:02.571 "name": null, 00:13:02.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.571 "is_configured": false, 00:13:02.571 "data_offset": 2048, 00:13:02.571 "data_size": 63488 00:13:02.571 }, 00:13:02.571 { 00:13:02.571 "name": "BaseBdev3", 00:13:02.571 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:13:02.571 "is_configured": true, 00:13:02.571 "data_offset": 2048, 00:13:02.571 "data_size": 63488 00:13:02.571 }, 00:13:02.571 { 00:13:02.571 "name": "BaseBdev4", 00:13:02.571 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:13:02.571 "is_configured": true, 00:13:02.571 "data_offset": 2048, 00:13:02.571 "data_size": 63488 00:13:02.571 } 00:13:02.571 ] 00:13:02.571 }' 00:13:02.571 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.571 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.831 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:02.831 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.831 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.831 [2024-10-15 01:14:15.469589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:02.831 [2024-10-15 01:14:15.469693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.831 [2024-10-15 01:14:15.469732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:02.831 [2024-10-15 01:14:15.469759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.831 [2024-10-15 01:14:15.470240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:13:02.831 [2024-10-15 01:14:15.470295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:02.831 [2024-10-15 01:14:15.470407] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:02.831 [2024-10-15 01:14:15.470446] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:02.831 [2024-10-15 01:14:15.470489] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:02.831 [2024-10-15 01:14:15.470539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:02.831 [2024-10-15 01:14:15.474985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033950 00:13:02.831 spare 00:13:02.831 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.831 01:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:02.831 [2024-10-15 01:14:15.476955] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:03.768 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.768 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.768 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.768 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.768 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.768 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.768 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:03.768 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.769 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.028 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.028 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.028 "name": "raid_bdev1", 00:13:04.028 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:13:04.028 "strip_size_kb": 0, 00:13:04.028 "state": "online", 00:13:04.028 "raid_level": "raid1", 00:13:04.028 "superblock": true, 00:13:04.028 "num_base_bdevs": 4, 00:13:04.028 "num_base_bdevs_discovered": 3, 00:13:04.028 "num_base_bdevs_operational": 3, 00:13:04.028 "process": { 00:13:04.028 "type": "rebuild", 00:13:04.028 "target": "spare", 00:13:04.028 "progress": { 00:13:04.028 "blocks": 20480, 00:13:04.028 "percent": 32 00:13:04.028 } 00:13:04.028 }, 00:13:04.028 "base_bdevs_list": [ 00:13:04.028 { 00:13:04.028 "name": "spare", 00:13:04.028 "uuid": "1a11810c-2325-549c-b26a-3104b2696020", 00:13:04.028 "is_configured": true, 00:13:04.028 "data_offset": 2048, 00:13:04.028 "data_size": 63488 00:13:04.028 }, 00:13:04.028 { 00:13:04.028 "name": null, 00:13:04.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.028 "is_configured": false, 00:13:04.028 "data_offset": 2048, 00:13:04.028 "data_size": 63488 00:13:04.028 }, 00:13:04.028 { 00:13:04.028 "name": "BaseBdev3", 00:13:04.028 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:13:04.028 "is_configured": true, 00:13:04.028 "data_offset": 2048, 00:13:04.028 "data_size": 63488 00:13:04.028 }, 00:13:04.028 { 00:13:04.028 "name": "BaseBdev4", 00:13:04.028 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:13:04.028 "is_configured": true, 00:13:04.028 "data_offset": 2048, 00:13:04.028 "data_size": 63488 00:13:04.028 } 00:13:04.028 ] 00:13:04.028 }' 00:13:04.028 01:14:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.028 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.028 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.028 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.028 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.029 [2024-10-15 01:14:16.645282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.029 [2024-10-15 01:14:16.681101] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:04.029 [2024-10-15 01:14:16.681237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.029 [2024-10-15 01:14:16.681271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.029 [2024-10-15 01:14:16.681280] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.029 "name": "raid_bdev1", 00:13:04.029 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:13:04.029 "strip_size_kb": 0, 00:13:04.029 "state": "online", 00:13:04.029 "raid_level": "raid1", 00:13:04.029 "superblock": true, 00:13:04.029 "num_base_bdevs": 4, 00:13:04.029 "num_base_bdevs_discovered": 2, 00:13:04.029 "num_base_bdevs_operational": 2, 00:13:04.029 "base_bdevs_list": [ 00:13:04.029 { 00:13:04.029 "name": null, 00:13:04.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.029 "is_configured": false, 00:13:04.029 "data_offset": 0, 00:13:04.029 "data_size": 63488 00:13:04.029 }, 00:13:04.029 { 00:13:04.029 "name": null, 00:13:04.029 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:04.029 "is_configured": false, 00:13:04.029 "data_offset": 2048, 00:13:04.029 "data_size": 63488 00:13:04.029 }, 00:13:04.029 { 00:13:04.029 "name": "BaseBdev3", 00:13:04.029 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:13:04.029 "is_configured": true, 00:13:04.029 "data_offset": 2048, 00:13:04.029 "data_size": 63488 00:13:04.029 }, 00:13:04.029 { 00:13:04.029 "name": "BaseBdev4", 00:13:04.029 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:13:04.029 "is_configured": true, 00:13:04.029 "data_offset": 2048, 00:13:04.029 "data_size": 63488 00:13:04.029 } 00:13:04.029 ] 00:13:04.029 }' 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.029 01:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.598 
01:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.598 "name": "raid_bdev1", 00:13:04.598 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:13:04.598 "strip_size_kb": 0, 00:13:04.598 "state": "online", 00:13:04.598 "raid_level": "raid1", 00:13:04.598 "superblock": true, 00:13:04.598 "num_base_bdevs": 4, 00:13:04.598 "num_base_bdevs_discovered": 2, 00:13:04.598 "num_base_bdevs_operational": 2, 00:13:04.598 "base_bdevs_list": [ 00:13:04.598 { 00:13:04.598 "name": null, 00:13:04.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.598 "is_configured": false, 00:13:04.598 "data_offset": 0, 00:13:04.598 "data_size": 63488 00:13:04.598 }, 00:13:04.598 { 00:13:04.598 "name": null, 00:13:04.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.598 "is_configured": false, 00:13:04.598 "data_offset": 2048, 00:13:04.598 "data_size": 63488 00:13:04.598 }, 00:13:04.598 { 00:13:04.598 "name": "BaseBdev3", 00:13:04.598 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:13:04.598 "is_configured": true, 00:13:04.598 "data_offset": 2048, 00:13:04.598 "data_size": 63488 00:13:04.598 }, 00:13:04.598 { 00:13:04.598 "name": "BaseBdev4", 00:13:04.598 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:13:04.598 "is_configured": true, 00:13:04.598 "data_offset": 2048, 00:13:04.598 "data_size": 63488 00:13:04.598 } 00:13:04.598 ] 00:13:04.598 }' 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 
00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.598 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.858 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.858 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:04.858 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.858 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.858 [2024-10-15 01:14:17.336599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:04.858 [2024-10-15 01:14:17.336660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.858 [2024-10-15 01:14:17.336682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:04.858 [2024-10-15 01:14:17.336693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.858 [2024-10-15 01:14:17.337083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.858 [2024-10-15 01:14:17.337102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:04.858 [2024-10-15 01:14:17.337181] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:04.858 [2024-10-15 01:14:17.337296] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:04.858 [2024-10-15 01:14:17.337330] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:04.858 [2024-10-15 01:14:17.337356] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: 
Invalid argument 00:13:04.858 BaseBdev1 00:13:04.858 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.858 01:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.795 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.795 "name": "raid_bdev1", 00:13:05.795 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:13:05.795 "strip_size_kb": 0, 00:13:05.795 "state": "online", 00:13:05.795 "raid_level": "raid1", 00:13:05.795 "superblock": true, 00:13:05.795 "num_base_bdevs": 4, 00:13:05.795 "num_base_bdevs_discovered": 2, 00:13:05.795 "num_base_bdevs_operational": 2, 00:13:05.795 "base_bdevs_list": [ 00:13:05.795 { 00:13:05.795 "name": null, 00:13:05.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.795 "is_configured": false, 00:13:05.795 "data_offset": 0, 00:13:05.795 "data_size": 63488 00:13:05.795 }, 00:13:05.795 { 00:13:05.795 "name": null, 00:13:05.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.795 "is_configured": false, 00:13:05.795 "data_offset": 2048, 00:13:05.795 "data_size": 63488 00:13:05.795 }, 00:13:05.795 { 00:13:05.795 "name": "BaseBdev3", 00:13:05.795 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:13:05.795 "is_configured": true, 00:13:05.795 "data_offset": 2048, 00:13:05.795 "data_size": 63488 00:13:05.795 }, 00:13:05.795 { 00:13:05.795 "name": "BaseBdev4", 00:13:05.795 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:13:05.795 "is_configured": true, 00:13:05.795 "data_offset": 2048, 00:13:05.795 "data_size": 63488 00:13:05.795 } 00:13:05.795 ] 00:13:05.796 }' 00:13:05.796 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.796 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.055 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.055 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.055 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.055 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.055 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.055 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.055 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.055 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.055 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.315 "name": "raid_bdev1", 00:13:06.315 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:13:06.315 "strip_size_kb": 0, 00:13:06.315 "state": "online", 00:13:06.315 "raid_level": "raid1", 00:13:06.315 "superblock": true, 00:13:06.315 "num_base_bdevs": 4, 00:13:06.315 "num_base_bdevs_discovered": 2, 00:13:06.315 "num_base_bdevs_operational": 2, 00:13:06.315 "base_bdevs_list": [ 00:13:06.315 { 00:13:06.315 "name": null, 00:13:06.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.315 "is_configured": false, 00:13:06.315 "data_offset": 0, 00:13:06.315 "data_size": 63488 00:13:06.315 }, 00:13:06.315 { 00:13:06.315 "name": null, 00:13:06.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.315 "is_configured": false, 00:13:06.315 "data_offset": 2048, 00:13:06.315 "data_size": 63488 00:13:06.315 }, 00:13:06.315 { 00:13:06.315 "name": "BaseBdev3", 00:13:06.315 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:13:06.315 "is_configured": true, 00:13:06.315 "data_offset": 2048, 00:13:06.315 "data_size": 63488 00:13:06.315 }, 00:13:06.315 { 00:13:06.315 "name": "BaseBdev4", 00:13:06.315 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 
00:13:06.315 "is_configured": true, 00:13:06.315 "data_offset": 2048, 00:13:06.315 "data_size": 63488 00:13:06.315 } 00:13:06.315 ] 00:13:06.315 }' 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.315 [2024-10-15 01:14:18.918275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.315 [2024-10-15 
01:14:18.918481] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:06.315 [2024-10-15 01:14:18.918503] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:06.315 request: 00:13:06.315 { 00:13:06.315 "base_bdev": "BaseBdev1", 00:13:06.315 "raid_bdev": "raid_bdev1", 00:13:06.315 "method": "bdev_raid_add_base_bdev", 00:13:06.315 "req_id": 1 00:13:06.315 } 00:13:06.315 Got JSON-RPC error response 00:13:06.315 response: 00:13:06.315 { 00:13:06.315 "code": -22, 00:13:06.315 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:06.315 } 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:06.315 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:06.316 01:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:07.255 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:07.255 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.255 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.255 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.255 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.255 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:13:07.255 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.255 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.255 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.255 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.255 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.255 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.255 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.255 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.255 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.515 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.515 "name": "raid_bdev1", 00:13:07.515 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:13:07.515 "strip_size_kb": 0, 00:13:07.515 "state": "online", 00:13:07.515 "raid_level": "raid1", 00:13:07.515 "superblock": true, 00:13:07.515 "num_base_bdevs": 4, 00:13:07.515 "num_base_bdevs_discovered": 2, 00:13:07.515 "num_base_bdevs_operational": 2, 00:13:07.515 "base_bdevs_list": [ 00:13:07.515 { 00:13:07.515 "name": null, 00:13:07.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.515 "is_configured": false, 00:13:07.515 "data_offset": 0, 00:13:07.515 "data_size": 63488 00:13:07.515 }, 00:13:07.515 { 00:13:07.515 "name": null, 00:13:07.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.515 "is_configured": false, 00:13:07.515 "data_offset": 2048, 00:13:07.515 "data_size": 63488 00:13:07.515 }, 00:13:07.515 { 00:13:07.515 "name": 
"BaseBdev3", 00:13:07.515 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:13:07.515 "is_configured": true, 00:13:07.515 "data_offset": 2048, 00:13:07.515 "data_size": 63488 00:13:07.515 }, 00:13:07.515 { 00:13:07.515 "name": "BaseBdev4", 00:13:07.515 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:13:07.515 "is_configured": true, 00:13:07.515 "data_offset": 2048, 00:13:07.515 "data_size": 63488 00:13:07.515 } 00:13:07.515 ] 00:13:07.516 }' 00:13:07.516 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.516 01:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.775 "name": "raid_bdev1", 00:13:07.775 "uuid": "d5fe4bab-cfa2-4a48-847c-070dde55b929", 00:13:07.775 
"strip_size_kb": 0, 00:13:07.775 "state": "online", 00:13:07.775 "raid_level": "raid1", 00:13:07.775 "superblock": true, 00:13:07.775 "num_base_bdevs": 4, 00:13:07.775 "num_base_bdevs_discovered": 2, 00:13:07.775 "num_base_bdevs_operational": 2, 00:13:07.775 "base_bdevs_list": [ 00:13:07.775 { 00:13:07.775 "name": null, 00:13:07.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.775 "is_configured": false, 00:13:07.775 "data_offset": 0, 00:13:07.775 "data_size": 63488 00:13:07.775 }, 00:13:07.775 { 00:13:07.775 "name": null, 00:13:07.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.775 "is_configured": false, 00:13:07.775 "data_offset": 2048, 00:13:07.775 "data_size": 63488 00:13:07.775 }, 00:13:07.775 { 00:13:07.775 "name": "BaseBdev3", 00:13:07.775 "uuid": "5e09d8a5-35f5-5fc4-9a0b-36dcbc65def6", 00:13:07.775 "is_configured": true, 00:13:07.775 "data_offset": 2048, 00:13:07.775 "data_size": 63488 00:13:07.775 }, 00:13:07.775 { 00:13:07.775 "name": "BaseBdev4", 00:13:07.775 "uuid": "e51501e4-c034-5abb-aeff-e75324939ead", 00:13:07.775 "is_configured": true, 00:13:07.775 "data_offset": 2048, 00:13:07.775 "data_size": 63488 00:13:07.775 } 00:13:07.775 ] 00:13:07.775 }' 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89496 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 89496 ']' 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 89496 00:13:07.775 
01:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:07.775 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89496 00:13:08.036 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:08.036 killing process with pid 89496 00:13:08.036 Received shutdown signal, test time was about 17.278270 seconds 00:13:08.036 00:13:08.036 Latency(us) 00:13:08.036 [2024-10-15T01:14:20.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.036 [2024-10-15T01:14:20.760Z] =================================================================================================================== 00:13:08.036 [2024-10-15T01:14:20.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:08.036 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:08.036 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89496' 00:13:08.036 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 89496 00:13:08.036 [2024-10-15 01:14:20.511631] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.036 [2024-10-15 01:14:20.511761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.036 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 89496 00:13:08.036 [2024-10-15 01:14:20.511875] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.036 [2024-10-15 01:14:20.511886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:13:08.036 [2024-10-15 01:14:20.556200] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.036 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:08.036 00:13:08.036 real 0m19.196s 00:13:08.036 user 0m25.530s 00:13:08.036 sys 0m2.368s 00:13:08.036 ************************************ 00:13:08.036 END TEST raid_rebuild_test_sb_io 00:13:08.036 ************************************ 00:13:08.036 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:08.036 01:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.297 01:14:20 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:08.297 01:14:20 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:08.297 01:14:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:08.297 01:14:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:08.297 01:14:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.297 ************************************ 00:13:08.297 START TEST raid5f_state_function_test 00:13:08.297 ************************************ 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:08.297 Process raid pid: 90200 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90200 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90200' 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90200 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90200 ']' 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:08.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:08.297 01:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.297 [2024-10-15 01:14:20.927221] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:13:08.297 [2024-10-15 01:14:20.927425] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.557 [2024-10-15 01:14:21.074087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.557 [2024-10-15 01:14:21.100831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.557 [2024-10-15 01:14:21.142899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.557 [2024-10-15 01:14:21.142933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.127 [2024-10-15 01:14:21.744667] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:09.127 [2024-10-15 01:14:21.744772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:09.127 [2024-10-15 01:14:21.744794] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:09.127 [2024-10-15 01:14:21.744807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:09.127 [2024-10-15 01:14:21.744814] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:09.127 [2024-10-15 01:14:21.744825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:09.127 01:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.127 "name": "Existed_Raid", 00:13:09.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.127 "strip_size_kb": 64, 00:13:09.127 "state": "configuring", 00:13:09.127 "raid_level": "raid5f", 00:13:09.127 "superblock": false, 00:13:09.127 "num_base_bdevs": 3, 00:13:09.127 "num_base_bdevs_discovered": 0, 00:13:09.127 "num_base_bdevs_operational": 3, 00:13:09.128 "base_bdevs_list": [ 00:13:09.128 { 00:13:09.128 "name": "BaseBdev1", 00:13:09.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.128 "is_configured": false, 00:13:09.128 "data_offset": 0, 00:13:09.128 "data_size": 0 00:13:09.128 }, 00:13:09.128 { 00:13:09.128 "name": "BaseBdev2", 00:13:09.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.128 "is_configured": false, 00:13:09.128 "data_offset": 0, 00:13:09.128 "data_size": 0 00:13:09.128 }, 00:13:09.128 { 00:13:09.128 "name": "BaseBdev3", 00:13:09.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.128 "is_configured": false, 00:13:09.128 "data_offset": 0, 00:13:09.128 "data_size": 0 00:13:09.128 } 00:13:09.128 ] 00:13:09.128 }' 00:13:09.128 01:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.128 01:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.698 [2024-10-15 01:14:22.183853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:09.698 [2024-10-15 01:14:22.183928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001200 name Existed_Raid, state configuring 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.698 [2024-10-15 01:14:22.195869] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:09.698 [2024-10-15 01:14:22.195946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:09.698 [2024-10-15 01:14:22.195972] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:09.698 [2024-10-15 01:14:22.195994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:09.698 [2024-10-15 01:14:22.196011] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:09.698 [2024-10-15 01:14:22.196031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.698 [2024-10-15 01:14:22.216681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.698 BaseBdev1 00:13:09.698 01:14:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.698 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.698 [ 00:13:09.698 { 00:13:09.698 "name": "BaseBdev1", 00:13:09.698 "aliases": [ 00:13:09.698 "9635da38-f2a1-4152-814e-6bcdb5c9a554" 00:13:09.698 ], 00:13:09.698 "product_name": "Malloc disk", 00:13:09.698 "block_size": 512, 00:13:09.698 "num_blocks": 65536, 00:13:09.698 "uuid": "9635da38-f2a1-4152-814e-6bcdb5c9a554", 00:13:09.698 "assigned_rate_limits": { 00:13:09.698 "rw_ios_per_sec": 0, 00:13:09.698 
"rw_mbytes_per_sec": 0, 00:13:09.698 "r_mbytes_per_sec": 0, 00:13:09.698 "w_mbytes_per_sec": 0 00:13:09.698 }, 00:13:09.698 "claimed": true, 00:13:09.698 "claim_type": "exclusive_write", 00:13:09.698 "zoned": false, 00:13:09.698 "supported_io_types": { 00:13:09.698 "read": true, 00:13:09.698 "write": true, 00:13:09.698 "unmap": true, 00:13:09.698 "flush": true, 00:13:09.698 "reset": true, 00:13:09.698 "nvme_admin": false, 00:13:09.699 "nvme_io": false, 00:13:09.699 "nvme_io_md": false, 00:13:09.699 "write_zeroes": true, 00:13:09.699 "zcopy": true, 00:13:09.699 "get_zone_info": false, 00:13:09.699 "zone_management": false, 00:13:09.699 "zone_append": false, 00:13:09.699 "compare": false, 00:13:09.699 "compare_and_write": false, 00:13:09.699 "abort": true, 00:13:09.699 "seek_hole": false, 00:13:09.699 "seek_data": false, 00:13:09.699 "copy": true, 00:13:09.699 "nvme_iov_md": false 00:13:09.699 }, 00:13:09.699 "memory_domains": [ 00:13:09.699 { 00:13:09.699 "dma_device_id": "system", 00:13:09.699 "dma_device_type": 1 00:13:09.699 }, 00:13:09.699 { 00:13:09.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.699 "dma_device_type": 2 00:13:09.699 } 00:13:09.699 ], 00:13:09.699 "driver_specific": {} 00:13:09.699 } 00:13:09.699 ] 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:09.699 01:14:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.699 "name": "Existed_Raid", 00:13:09.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.699 "strip_size_kb": 64, 00:13:09.699 "state": "configuring", 00:13:09.699 "raid_level": "raid5f", 00:13:09.699 "superblock": false, 00:13:09.699 "num_base_bdevs": 3, 00:13:09.699 "num_base_bdevs_discovered": 1, 00:13:09.699 "num_base_bdevs_operational": 3, 00:13:09.699 "base_bdevs_list": [ 00:13:09.699 { 00:13:09.699 "name": "BaseBdev1", 00:13:09.699 "uuid": "9635da38-f2a1-4152-814e-6bcdb5c9a554", 00:13:09.699 "is_configured": true, 00:13:09.699 "data_offset": 0, 00:13:09.699 "data_size": 65536 00:13:09.699 }, 00:13:09.699 { 00:13:09.699 "name": 
"BaseBdev2", 00:13:09.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.699 "is_configured": false, 00:13:09.699 "data_offset": 0, 00:13:09.699 "data_size": 0 00:13:09.699 }, 00:13:09.699 { 00:13:09.699 "name": "BaseBdev3", 00:13:09.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.699 "is_configured": false, 00:13:09.699 "data_offset": 0, 00:13:09.699 "data_size": 0 00:13:09.699 } 00:13:09.699 ] 00:13:09.699 }' 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.699 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.959 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:09.959 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.959 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.959 [2024-10-15 01:14:22.667952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:09.959 [2024-10-15 01:14:22.667998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:09.959 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.959 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:09.959 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.959 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.959 [2024-10-15 01:14:22.675999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.959 [2024-10-15 01:14:22.677821] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:09.959 [2024-10-15 01:14:22.677865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:09.959 [2024-10-15 01:14:22.677875] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:09.959 [2024-10-15 01:14:22.677885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:09.959 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.959 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:09.959 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:09.960 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:09.960 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.960 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.960 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:09.960 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.960 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.219 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.219 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.219 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.219 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.219 01:14:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.219 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.219 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.219 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.220 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.220 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.220 "name": "Existed_Raid", 00:13:10.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.220 "strip_size_kb": 64, 00:13:10.220 "state": "configuring", 00:13:10.220 "raid_level": "raid5f", 00:13:10.220 "superblock": false, 00:13:10.220 "num_base_bdevs": 3, 00:13:10.220 "num_base_bdevs_discovered": 1, 00:13:10.220 "num_base_bdevs_operational": 3, 00:13:10.220 "base_bdevs_list": [ 00:13:10.220 { 00:13:10.220 "name": "BaseBdev1", 00:13:10.220 "uuid": "9635da38-f2a1-4152-814e-6bcdb5c9a554", 00:13:10.220 "is_configured": true, 00:13:10.220 "data_offset": 0, 00:13:10.220 "data_size": 65536 00:13:10.220 }, 00:13:10.220 { 00:13:10.220 "name": "BaseBdev2", 00:13:10.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.220 "is_configured": false, 00:13:10.220 "data_offset": 0, 00:13:10.220 "data_size": 0 00:13:10.220 }, 00:13:10.220 { 00:13:10.220 "name": "BaseBdev3", 00:13:10.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.220 "is_configured": false, 00:13:10.220 "data_offset": 0, 00:13:10.220 "data_size": 0 00:13:10.220 } 00:13:10.220 ] 00:13:10.220 }' 00:13:10.220 01:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.220 01:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.480 [2024-10-15 01:14:23.114362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:10.480 BaseBdev2 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:10.480 [ 00:13:10.480 { 00:13:10.480 "name": "BaseBdev2", 00:13:10.480 "aliases": [ 00:13:10.480 "8df75916-a810-475e-a0b4-7401fd1604e0" 00:13:10.480 ], 00:13:10.480 "product_name": "Malloc disk", 00:13:10.480 "block_size": 512, 00:13:10.480 "num_blocks": 65536, 00:13:10.480 "uuid": "8df75916-a810-475e-a0b4-7401fd1604e0", 00:13:10.480 "assigned_rate_limits": { 00:13:10.480 "rw_ios_per_sec": 0, 00:13:10.480 "rw_mbytes_per_sec": 0, 00:13:10.480 "r_mbytes_per_sec": 0, 00:13:10.480 "w_mbytes_per_sec": 0 00:13:10.480 }, 00:13:10.480 "claimed": true, 00:13:10.480 "claim_type": "exclusive_write", 00:13:10.480 "zoned": false, 00:13:10.480 "supported_io_types": { 00:13:10.480 "read": true, 00:13:10.480 "write": true, 00:13:10.480 "unmap": true, 00:13:10.480 "flush": true, 00:13:10.480 "reset": true, 00:13:10.480 "nvme_admin": false, 00:13:10.480 "nvme_io": false, 00:13:10.480 "nvme_io_md": false, 00:13:10.480 "write_zeroes": true, 00:13:10.480 "zcopy": true, 00:13:10.480 "get_zone_info": false, 00:13:10.480 "zone_management": false, 00:13:10.480 "zone_append": false, 00:13:10.480 "compare": false, 00:13:10.480 "compare_and_write": false, 00:13:10.480 "abort": true, 00:13:10.480 "seek_hole": false, 00:13:10.480 "seek_data": false, 00:13:10.480 "copy": true, 00:13:10.480 "nvme_iov_md": false 00:13:10.480 }, 00:13:10.480 "memory_domains": [ 00:13:10.480 { 00:13:10.480 "dma_device_id": "system", 00:13:10.480 "dma_device_type": 1 00:13:10.480 }, 00:13:10.480 { 00:13:10.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.480 "dma_device_type": 2 00:13:10.480 } 00:13:10.480 ], 00:13:10.480 "driver_specific": {} 00:13:10.480 } 00:13:10.480 ] 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.480 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.481 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.481 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.481 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.481 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.481 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.740 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:10.740 "name": "Existed_Raid", 00:13:10.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.740 "strip_size_kb": 64, 00:13:10.740 "state": "configuring", 00:13:10.740 "raid_level": "raid5f", 00:13:10.740 "superblock": false, 00:13:10.740 "num_base_bdevs": 3, 00:13:10.740 "num_base_bdevs_discovered": 2, 00:13:10.740 "num_base_bdevs_operational": 3, 00:13:10.740 "base_bdevs_list": [ 00:13:10.740 { 00:13:10.740 "name": "BaseBdev1", 00:13:10.740 "uuid": "9635da38-f2a1-4152-814e-6bcdb5c9a554", 00:13:10.740 "is_configured": true, 00:13:10.740 "data_offset": 0, 00:13:10.740 "data_size": 65536 00:13:10.740 }, 00:13:10.740 { 00:13:10.740 "name": "BaseBdev2", 00:13:10.740 "uuid": "8df75916-a810-475e-a0b4-7401fd1604e0", 00:13:10.740 "is_configured": true, 00:13:10.740 "data_offset": 0, 00:13:10.740 "data_size": 65536 00:13:10.740 }, 00:13:10.740 { 00:13:10.740 "name": "BaseBdev3", 00:13:10.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.740 "is_configured": false, 00:13:10.740 "data_offset": 0, 00:13:10.740 "data_size": 0 00:13:10.740 } 00:13:10.740 ] 00:13:10.740 }' 00:13:10.740 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.740 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.001 [2024-10-15 01:14:23.595723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:11.001 [2024-10-15 01:14:23.595805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:11.001 [2024-10-15 01:14:23.595821] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:11.001 [2024-10-15 01:14:23.596198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:11.001 [2024-10-15 01:14:23.596754] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:11.001 [2024-10-15 01:14:23.596777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:11.001 [2024-10-15 01:14:23.597044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.001 BaseBdev3 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.001 [ 00:13:11.001 { 00:13:11.001 "name": "BaseBdev3", 00:13:11.001 "aliases": [ 00:13:11.001 "5556c2e9-8e24-494e-a944-b4a2638a8c58" 00:13:11.001 ], 00:13:11.001 "product_name": "Malloc disk", 00:13:11.001 "block_size": 512, 00:13:11.001 "num_blocks": 65536, 00:13:11.001 "uuid": "5556c2e9-8e24-494e-a944-b4a2638a8c58", 00:13:11.001 "assigned_rate_limits": { 00:13:11.001 "rw_ios_per_sec": 0, 00:13:11.001 "rw_mbytes_per_sec": 0, 00:13:11.001 "r_mbytes_per_sec": 0, 00:13:11.001 "w_mbytes_per_sec": 0 00:13:11.001 }, 00:13:11.001 "claimed": true, 00:13:11.001 "claim_type": "exclusive_write", 00:13:11.001 "zoned": false, 00:13:11.001 "supported_io_types": { 00:13:11.001 "read": true, 00:13:11.001 "write": true, 00:13:11.001 "unmap": true, 00:13:11.001 "flush": true, 00:13:11.001 "reset": true, 00:13:11.001 "nvme_admin": false, 00:13:11.001 "nvme_io": false, 00:13:11.001 "nvme_io_md": false, 00:13:11.001 "write_zeroes": true, 00:13:11.001 "zcopy": true, 00:13:11.001 "get_zone_info": false, 00:13:11.001 "zone_management": false, 00:13:11.001 "zone_append": false, 00:13:11.001 "compare": false, 00:13:11.001 "compare_and_write": false, 00:13:11.001 "abort": true, 00:13:11.001 "seek_hole": false, 00:13:11.001 "seek_data": false, 00:13:11.001 "copy": true, 00:13:11.001 "nvme_iov_md": false 00:13:11.001 }, 00:13:11.001 "memory_domains": [ 00:13:11.001 { 00:13:11.001 "dma_device_id": "system", 00:13:11.001 "dma_device_type": 1 00:13:11.001 }, 00:13:11.001 { 00:13:11.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.001 "dma_device_type": 2 00:13:11.001 } 00:13:11.001 ], 00:13:11.001 "driver_specific": {} 00:13:11.001 } 00:13:11.001 ] 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.001 01:14:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.001 "name": "Existed_Raid", 00:13:11.001 "uuid": "c43f6878-8a15-4375-9da9-a0c94745b800", 00:13:11.001 "strip_size_kb": 64, 00:13:11.001 "state": "online", 00:13:11.001 "raid_level": "raid5f", 00:13:11.001 "superblock": false, 00:13:11.001 "num_base_bdevs": 3, 00:13:11.001 "num_base_bdevs_discovered": 3, 00:13:11.001 "num_base_bdevs_operational": 3, 00:13:11.001 "base_bdevs_list": [ 00:13:11.001 { 00:13:11.001 "name": "BaseBdev1", 00:13:11.001 "uuid": "9635da38-f2a1-4152-814e-6bcdb5c9a554", 00:13:11.001 "is_configured": true, 00:13:11.001 "data_offset": 0, 00:13:11.001 "data_size": 65536 00:13:11.001 }, 00:13:11.001 { 00:13:11.001 "name": "BaseBdev2", 00:13:11.001 "uuid": "8df75916-a810-475e-a0b4-7401fd1604e0", 00:13:11.001 "is_configured": true, 00:13:11.001 "data_offset": 0, 00:13:11.001 "data_size": 65536 00:13:11.001 }, 00:13:11.001 { 00:13:11.001 "name": "BaseBdev3", 00:13:11.001 "uuid": "5556c2e9-8e24-494e-a944-b4a2638a8c58", 00:13:11.001 "is_configured": true, 00:13:11.001 "data_offset": 0, 00:13:11.001 "data_size": 65536 00:13:11.001 } 00:13:11.001 ] 00:13:11.001 }' 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.001 01:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.571 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:11.571 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:11.571 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:11.571 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:11.571 01:14:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:11.571 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:11.571 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:11.571 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.571 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.571 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:11.571 [2024-10-15 01:14:24.107056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.571 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.571 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:11.571 "name": "Existed_Raid", 00:13:11.571 "aliases": [ 00:13:11.571 "c43f6878-8a15-4375-9da9-a0c94745b800" 00:13:11.571 ], 00:13:11.571 "product_name": "Raid Volume", 00:13:11.571 "block_size": 512, 00:13:11.571 "num_blocks": 131072, 00:13:11.571 "uuid": "c43f6878-8a15-4375-9da9-a0c94745b800", 00:13:11.571 "assigned_rate_limits": { 00:13:11.571 "rw_ios_per_sec": 0, 00:13:11.571 "rw_mbytes_per_sec": 0, 00:13:11.571 "r_mbytes_per_sec": 0, 00:13:11.571 "w_mbytes_per_sec": 0 00:13:11.571 }, 00:13:11.571 "claimed": false, 00:13:11.571 "zoned": false, 00:13:11.571 "supported_io_types": { 00:13:11.571 "read": true, 00:13:11.571 "write": true, 00:13:11.571 "unmap": false, 00:13:11.571 "flush": false, 00:13:11.571 "reset": true, 00:13:11.571 "nvme_admin": false, 00:13:11.571 "nvme_io": false, 00:13:11.571 "nvme_io_md": false, 00:13:11.571 "write_zeroes": true, 00:13:11.571 "zcopy": false, 00:13:11.571 "get_zone_info": false, 00:13:11.571 "zone_management": false, 00:13:11.571 "zone_append": false, 
00:13:11.571 "compare": false, 00:13:11.571 "compare_and_write": false, 00:13:11.571 "abort": false, 00:13:11.571 "seek_hole": false, 00:13:11.571 "seek_data": false, 00:13:11.571 "copy": false, 00:13:11.571 "nvme_iov_md": false 00:13:11.571 }, 00:13:11.571 "driver_specific": { 00:13:11.571 "raid": { 00:13:11.571 "uuid": "c43f6878-8a15-4375-9da9-a0c94745b800", 00:13:11.571 "strip_size_kb": 64, 00:13:11.571 "state": "online", 00:13:11.571 "raid_level": "raid5f", 00:13:11.571 "superblock": false, 00:13:11.571 "num_base_bdevs": 3, 00:13:11.571 "num_base_bdevs_discovered": 3, 00:13:11.571 "num_base_bdevs_operational": 3, 00:13:11.571 "base_bdevs_list": [ 00:13:11.571 { 00:13:11.571 "name": "BaseBdev1", 00:13:11.571 "uuid": "9635da38-f2a1-4152-814e-6bcdb5c9a554", 00:13:11.571 "is_configured": true, 00:13:11.571 "data_offset": 0, 00:13:11.571 "data_size": 65536 00:13:11.571 }, 00:13:11.571 { 00:13:11.571 "name": "BaseBdev2", 00:13:11.571 "uuid": "8df75916-a810-475e-a0b4-7401fd1604e0", 00:13:11.571 "is_configured": true, 00:13:11.571 "data_offset": 0, 00:13:11.571 "data_size": 65536 00:13:11.571 }, 00:13:11.571 { 00:13:11.571 "name": "BaseBdev3", 00:13:11.571 "uuid": "5556c2e9-8e24-494e-a944-b4a2638a8c58", 00:13:11.571 "is_configured": true, 00:13:11.571 "data_offset": 0, 00:13:11.571 "data_size": 65536 00:13:11.571 } 00:13:11.571 ] 00:13:11.571 } 00:13:11.571 } 00:13:11.571 }' 00:13:11.571 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:11.571 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:11.571 BaseBdev2 00:13:11.571 BaseBdev3' 00:13:11.572 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.572 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:13:11.572 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.572 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:11.572 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.572 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.572 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.572 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.572 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.572 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.572 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.572 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:11.572 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.572 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.572 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.832 [2024-10-15 01:14:24.398372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:11.832 
01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.832 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.832 "name": "Existed_Raid", 00:13:11.832 "uuid": "c43f6878-8a15-4375-9da9-a0c94745b800", 00:13:11.832 "strip_size_kb": 64, 00:13:11.832 "state": 
"online", 00:13:11.832 "raid_level": "raid5f", 00:13:11.832 "superblock": false, 00:13:11.832 "num_base_bdevs": 3, 00:13:11.832 "num_base_bdevs_discovered": 2, 00:13:11.832 "num_base_bdevs_operational": 2, 00:13:11.832 "base_bdevs_list": [ 00:13:11.832 { 00:13:11.832 "name": null, 00:13:11.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.832 "is_configured": false, 00:13:11.832 "data_offset": 0, 00:13:11.832 "data_size": 65536 00:13:11.832 }, 00:13:11.832 { 00:13:11.832 "name": "BaseBdev2", 00:13:11.832 "uuid": "8df75916-a810-475e-a0b4-7401fd1604e0", 00:13:11.832 "is_configured": true, 00:13:11.832 "data_offset": 0, 00:13:11.832 "data_size": 65536 00:13:11.832 }, 00:13:11.832 { 00:13:11.832 "name": "BaseBdev3", 00:13:11.832 "uuid": "5556c2e9-8e24-494e-a944-b4a2638a8c58", 00:13:11.833 "is_configured": true, 00:13:11.833 "data_offset": 0, 00:13:11.833 "data_size": 65536 00:13:11.833 } 00:13:11.833 ] 00:13:11.833 }' 00:13:11.833 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.833 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.403 [2024-10-15 01:14:24.936648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:12.403 [2024-10-15 01:14:24.936788] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.403 [2024-10-15 01:14:24.947742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.403 01:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.403 [2024-10-15 01:14:25.007660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:12.403 [2024-10-15 01:14:25.007707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.403 BaseBdev2 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.403 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:12.403 [ 00:13:12.403 { 00:13:12.403 "name": "BaseBdev2", 00:13:12.403 "aliases": [ 00:13:12.403 "7ccc92d4-3e1d-419f-b316-e73f7c424a24" 00:13:12.403 ], 00:13:12.403 "product_name": "Malloc disk", 00:13:12.403 "block_size": 512, 00:13:12.403 "num_blocks": 65536, 00:13:12.403 "uuid": "7ccc92d4-3e1d-419f-b316-e73f7c424a24", 00:13:12.403 "assigned_rate_limits": { 00:13:12.403 "rw_ios_per_sec": 0, 00:13:12.403 "rw_mbytes_per_sec": 0, 00:13:12.403 "r_mbytes_per_sec": 0, 00:13:12.403 "w_mbytes_per_sec": 0 00:13:12.403 }, 00:13:12.403 "claimed": false, 00:13:12.403 "zoned": false, 00:13:12.403 "supported_io_types": { 00:13:12.403 "read": true, 00:13:12.403 "write": true, 00:13:12.403 "unmap": true, 00:13:12.403 "flush": true, 00:13:12.403 "reset": true, 00:13:12.403 "nvme_admin": false, 00:13:12.403 "nvme_io": false, 00:13:12.403 "nvme_io_md": false, 00:13:12.403 "write_zeroes": true, 00:13:12.403 "zcopy": true, 00:13:12.403 "get_zone_info": false, 00:13:12.403 "zone_management": false, 00:13:12.403 "zone_append": false, 00:13:12.403 "compare": false, 00:13:12.403 "compare_and_write": false, 00:13:12.403 "abort": true, 00:13:12.403 "seek_hole": false, 00:13:12.403 "seek_data": false, 00:13:12.403 "copy": true, 00:13:12.404 "nvme_iov_md": false 00:13:12.404 }, 00:13:12.404 "memory_domains": [ 00:13:12.404 { 00:13:12.404 "dma_device_id": "system", 00:13:12.404 "dma_device_type": 1 00:13:12.404 }, 00:13:12.404 { 00:13:12.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.404 "dma_device_type": 2 00:13:12.404 } 00:13:12.404 ], 00:13:12.404 "driver_specific": {} 00:13:12.404 } 00:13:12.404 ] 00:13:12.404 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.404 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:12.404 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:12.404 01:14:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:12.404 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:12.404 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.404 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.664 BaseBdev3 00:13:12.664 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.664 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:12.664 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:12.664 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:12.664 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:12.664 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:12.664 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:12.664 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:12.664 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.664 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.664 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.664 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:12.664 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.664 01:14:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:12.664 [ 00:13:12.664 { 00:13:12.664 "name": "BaseBdev3", 00:13:12.664 "aliases": [ 00:13:12.664 "d3ecc73c-8b7a-4045-a3db-d905e7ab7ede" 00:13:12.664 ], 00:13:12.664 "product_name": "Malloc disk", 00:13:12.664 "block_size": 512, 00:13:12.664 "num_blocks": 65536, 00:13:12.664 "uuid": "d3ecc73c-8b7a-4045-a3db-d905e7ab7ede", 00:13:12.664 "assigned_rate_limits": { 00:13:12.664 "rw_ios_per_sec": 0, 00:13:12.664 "rw_mbytes_per_sec": 0, 00:13:12.664 "r_mbytes_per_sec": 0, 00:13:12.664 "w_mbytes_per_sec": 0 00:13:12.664 }, 00:13:12.664 "claimed": false, 00:13:12.664 "zoned": false, 00:13:12.664 "supported_io_types": { 00:13:12.664 "read": true, 00:13:12.664 "write": true, 00:13:12.664 "unmap": true, 00:13:12.664 "flush": true, 00:13:12.664 "reset": true, 00:13:12.664 "nvme_admin": false, 00:13:12.665 "nvme_io": false, 00:13:12.665 "nvme_io_md": false, 00:13:12.665 "write_zeroes": true, 00:13:12.665 "zcopy": true, 00:13:12.665 "get_zone_info": false, 00:13:12.665 "zone_management": false, 00:13:12.665 "zone_append": false, 00:13:12.665 "compare": false, 00:13:12.665 "compare_and_write": false, 00:13:12.665 "abort": true, 00:13:12.665 "seek_hole": false, 00:13:12.665 "seek_data": false, 00:13:12.665 "copy": true, 00:13:12.665 "nvme_iov_md": false 00:13:12.665 }, 00:13:12.665 "memory_domains": [ 00:13:12.665 { 00:13:12.665 "dma_device_id": "system", 00:13:12.665 "dma_device_type": 1 00:13:12.665 }, 00:13:12.665 { 00:13:12.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.665 "dma_device_type": 2 00:13:12.665 } 00:13:12.665 ], 00:13:12.665 "driver_specific": {} 00:13:12.665 } 00:13:12.665 ] 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:12.665 01:14:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.665 [2024-10-15 01:14:25.174574] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:12.665 [2024-10-15 01:14:25.174666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:12.665 [2024-10-15 01:14:25.174704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:12.665 [2024-10-15 01:14:25.176483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.665 01:14:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.665 "name": "Existed_Raid", 00:13:12.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.665 "strip_size_kb": 64, 00:13:12.665 "state": "configuring", 00:13:12.665 "raid_level": "raid5f", 00:13:12.665 "superblock": false, 00:13:12.665 "num_base_bdevs": 3, 00:13:12.665 "num_base_bdevs_discovered": 2, 00:13:12.665 "num_base_bdevs_operational": 3, 00:13:12.665 "base_bdevs_list": [ 00:13:12.665 { 00:13:12.665 "name": "BaseBdev1", 00:13:12.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.665 "is_configured": false, 00:13:12.665 "data_offset": 0, 00:13:12.665 "data_size": 0 00:13:12.665 }, 00:13:12.665 { 00:13:12.665 "name": "BaseBdev2", 00:13:12.665 "uuid": "7ccc92d4-3e1d-419f-b316-e73f7c424a24", 00:13:12.665 "is_configured": true, 00:13:12.665 "data_offset": 0, 00:13:12.665 "data_size": 65536 00:13:12.665 }, 00:13:12.665 { 00:13:12.665 "name": "BaseBdev3", 00:13:12.665 "uuid": "d3ecc73c-8b7a-4045-a3db-d905e7ab7ede", 00:13:12.665 "is_configured": true, 
00:13:12.665 "data_offset": 0, 00:13:12.665 "data_size": 65536 00:13:12.665 } 00:13:12.665 ] 00:13:12.665 }' 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.665 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.925 [2024-10-15 01:14:25.589823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.925 01:14:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.925 "name": "Existed_Raid", 00:13:12.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.925 "strip_size_kb": 64, 00:13:12.925 "state": "configuring", 00:13:12.925 "raid_level": "raid5f", 00:13:12.925 "superblock": false, 00:13:12.925 "num_base_bdevs": 3, 00:13:12.925 "num_base_bdevs_discovered": 1, 00:13:12.925 "num_base_bdevs_operational": 3, 00:13:12.925 "base_bdevs_list": [ 00:13:12.925 { 00:13:12.925 "name": "BaseBdev1", 00:13:12.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.925 "is_configured": false, 00:13:12.925 "data_offset": 0, 00:13:12.925 "data_size": 0 00:13:12.925 }, 00:13:12.925 { 00:13:12.925 "name": null, 00:13:12.925 "uuid": "7ccc92d4-3e1d-419f-b316-e73f7c424a24", 00:13:12.925 "is_configured": false, 00:13:12.925 "data_offset": 0, 00:13:12.925 "data_size": 65536 00:13:12.925 }, 00:13:12.925 { 00:13:12.925 "name": "BaseBdev3", 00:13:12.925 "uuid": "d3ecc73c-8b7a-4045-a3db-d905e7ab7ede", 00:13:12.925 "is_configured": true, 00:13:12.925 "data_offset": 0, 00:13:12.925 "data_size": 65536 00:13:12.925 } 00:13:12.925 ] 00:13:12.925 }' 00:13:12.925 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.925 01:14:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.496 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.496 01:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:13.496 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.496 01:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.496 [2024-10-15 01:14:26.044080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.496 BaseBdev1 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:13.496 01:14:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.496 [ 00:13:13.496 { 00:13:13.496 "name": "BaseBdev1", 00:13:13.496 "aliases": [ 00:13:13.496 "5675f71c-356f-480a-90e9-41879a88fc3b" 00:13:13.496 ], 00:13:13.496 "product_name": "Malloc disk", 00:13:13.496 "block_size": 512, 00:13:13.496 "num_blocks": 65536, 00:13:13.496 "uuid": "5675f71c-356f-480a-90e9-41879a88fc3b", 00:13:13.496 "assigned_rate_limits": { 00:13:13.496 "rw_ios_per_sec": 0, 00:13:13.496 "rw_mbytes_per_sec": 0, 00:13:13.496 "r_mbytes_per_sec": 0, 00:13:13.496 "w_mbytes_per_sec": 0 00:13:13.496 }, 00:13:13.496 "claimed": true, 00:13:13.496 "claim_type": "exclusive_write", 00:13:13.496 "zoned": false, 00:13:13.496 "supported_io_types": { 00:13:13.496 "read": true, 00:13:13.496 "write": true, 00:13:13.496 "unmap": true, 00:13:13.496 "flush": true, 00:13:13.496 "reset": true, 00:13:13.496 "nvme_admin": false, 00:13:13.496 "nvme_io": false, 00:13:13.496 "nvme_io_md": false, 00:13:13.496 "write_zeroes": true, 00:13:13.496 "zcopy": true, 00:13:13.496 "get_zone_info": false, 00:13:13.496 "zone_management": false, 00:13:13.496 "zone_append": false, 00:13:13.496 
"compare": false, 00:13:13.496 "compare_and_write": false, 00:13:13.496 "abort": true, 00:13:13.496 "seek_hole": false, 00:13:13.496 "seek_data": false, 00:13:13.496 "copy": true, 00:13:13.496 "nvme_iov_md": false 00:13:13.496 }, 00:13:13.496 "memory_domains": [ 00:13:13.496 { 00:13:13.496 "dma_device_id": "system", 00:13:13.496 "dma_device_type": 1 00:13:13.496 }, 00:13:13.496 { 00:13:13.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.496 "dma_device_type": 2 00:13:13.496 } 00:13:13.496 ], 00:13:13.496 "driver_specific": {} 00:13:13.496 } 00:13:13.496 ] 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.496 01:14:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.496 "name": "Existed_Raid", 00:13:13.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.496 "strip_size_kb": 64, 00:13:13.496 "state": "configuring", 00:13:13.496 "raid_level": "raid5f", 00:13:13.496 "superblock": false, 00:13:13.496 "num_base_bdevs": 3, 00:13:13.496 "num_base_bdevs_discovered": 2, 00:13:13.496 "num_base_bdevs_operational": 3, 00:13:13.496 "base_bdevs_list": [ 00:13:13.496 { 00:13:13.496 "name": "BaseBdev1", 00:13:13.496 "uuid": "5675f71c-356f-480a-90e9-41879a88fc3b", 00:13:13.496 "is_configured": true, 00:13:13.496 "data_offset": 0, 00:13:13.496 "data_size": 65536 00:13:13.496 }, 00:13:13.496 { 00:13:13.496 "name": null, 00:13:13.496 "uuid": "7ccc92d4-3e1d-419f-b316-e73f7c424a24", 00:13:13.496 "is_configured": false, 00:13:13.496 "data_offset": 0, 00:13:13.496 "data_size": 65536 00:13:13.496 }, 00:13:13.496 { 00:13:13.496 "name": "BaseBdev3", 00:13:13.496 "uuid": "d3ecc73c-8b7a-4045-a3db-d905e7ab7ede", 00:13:13.496 "is_configured": true, 00:13:13.496 "data_offset": 0, 00:13:13.496 "data_size": 65536 00:13:13.496 } 00:13:13.496 ] 00:13:13.496 }' 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.496 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.066 01:14:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.066 [2024-10-15 01:14:26.551288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.066 01:14:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.066 "name": "Existed_Raid", 00:13:14.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.066 "strip_size_kb": 64, 00:13:14.066 "state": "configuring", 00:13:14.066 "raid_level": "raid5f", 00:13:14.066 "superblock": false, 00:13:14.066 "num_base_bdevs": 3, 00:13:14.066 "num_base_bdevs_discovered": 1, 00:13:14.066 "num_base_bdevs_operational": 3, 00:13:14.066 "base_bdevs_list": [ 00:13:14.066 { 00:13:14.066 "name": "BaseBdev1", 00:13:14.066 "uuid": "5675f71c-356f-480a-90e9-41879a88fc3b", 00:13:14.066 "is_configured": true, 00:13:14.066 "data_offset": 0, 00:13:14.066 "data_size": 65536 00:13:14.066 }, 00:13:14.066 { 00:13:14.066 "name": null, 00:13:14.066 "uuid": "7ccc92d4-3e1d-419f-b316-e73f7c424a24", 00:13:14.066 "is_configured": false, 00:13:14.066 "data_offset": 0, 00:13:14.066 "data_size": 65536 00:13:14.066 }, 00:13:14.066 { 00:13:14.066 "name": null, 
00:13:14.066 "uuid": "d3ecc73c-8b7a-4045-a3db-d905e7ab7ede", 00:13:14.066 "is_configured": false, 00:13:14.066 "data_offset": 0, 00:13:14.066 "data_size": 65536 00:13:14.066 } 00:13:14.066 ] 00:13:14.066 }' 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.066 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.324 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.324 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.324 01:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.324 01:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:14.324 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.324 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:14.324 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:14.324 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.324 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.324 [2024-10-15 01:14:27.042463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.325 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.325 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:14.325 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.325 01:14:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.584 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.584 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.584 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.584 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.584 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.584 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.584 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.584 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.584 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.584 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.584 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.584 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.584 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.584 "name": "Existed_Raid", 00:13:14.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.584 "strip_size_kb": 64, 00:13:14.584 "state": "configuring", 00:13:14.584 "raid_level": "raid5f", 00:13:14.584 "superblock": false, 00:13:14.584 "num_base_bdevs": 3, 00:13:14.584 "num_base_bdevs_discovered": 2, 00:13:14.584 "num_base_bdevs_operational": 3, 00:13:14.584 "base_bdevs_list": [ 00:13:14.584 { 
00:13:14.584 "name": "BaseBdev1", 00:13:14.584 "uuid": "5675f71c-356f-480a-90e9-41879a88fc3b", 00:13:14.584 "is_configured": true, 00:13:14.584 "data_offset": 0, 00:13:14.584 "data_size": 65536 00:13:14.584 }, 00:13:14.584 { 00:13:14.584 "name": null, 00:13:14.584 "uuid": "7ccc92d4-3e1d-419f-b316-e73f7c424a24", 00:13:14.584 "is_configured": false, 00:13:14.584 "data_offset": 0, 00:13:14.584 "data_size": 65536 00:13:14.584 }, 00:13:14.584 { 00:13:14.584 "name": "BaseBdev3", 00:13:14.584 "uuid": "d3ecc73c-8b7a-4045-a3db-d905e7ab7ede", 00:13:14.584 "is_configured": true, 00:13:14.584 "data_offset": 0, 00:13:14.584 "data_size": 65536 00:13:14.584 } 00:13:14.584 ] 00:13:14.584 }' 00:13:14.584 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.584 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.843 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.844 [2024-10-15 01:14:27.537631] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.844 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.103 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.103 01:14:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.103 "name": "Existed_Raid", 00:13:15.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.103 "strip_size_kb": 64, 00:13:15.103 "state": "configuring", 00:13:15.103 "raid_level": "raid5f", 00:13:15.103 "superblock": false, 00:13:15.103 "num_base_bdevs": 3, 00:13:15.103 "num_base_bdevs_discovered": 1, 00:13:15.103 "num_base_bdevs_operational": 3, 00:13:15.103 "base_bdevs_list": [ 00:13:15.103 { 00:13:15.103 "name": null, 00:13:15.103 "uuid": "5675f71c-356f-480a-90e9-41879a88fc3b", 00:13:15.103 "is_configured": false, 00:13:15.103 "data_offset": 0, 00:13:15.103 "data_size": 65536 00:13:15.103 }, 00:13:15.103 { 00:13:15.103 "name": null, 00:13:15.103 "uuid": "7ccc92d4-3e1d-419f-b316-e73f7c424a24", 00:13:15.103 "is_configured": false, 00:13:15.103 "data_offset": 0, 00:13:15.103 "data_size": 65536 00:13:15.103 }, 00:13:15.103 { 00:13:15.103 "name": "BaseBdev3", 00:13:15.103 "uuid": "d3ecc73c-8b7a-4045-a3db-d905e7ab7ede", 00:13:15.103 "is_configured": true, 00:13:15.103 "data_offset": 0, 00:13:15.103 "data_size": 65536 00:13:15.103 } 00:13:15.103 ] 00:13:15.103 }' 00:13:15.103 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.103 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.363 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:15.363 01:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.363 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.363 01:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.363 [2024-10-15 01:14:28.023304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.363 01:14:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.363 "name": "Existed_Raid", 00:13:15.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.363 "strip_size_kb": 64, 00:13:15.363 "state": "configuring", 00:13:15.363 "raid_level": "raid5f", 00:13:15.363 "superblock": false, 00:13:15.363 "num_base_bdevs": 3, 00:13:15.363 "num_base_bdevs_discovered": 2, 00:13:15.363 "num_base_bdevs_operational": 3, 00:13:15.363 "base_bdevs_list": [ 00:13:15.363 { 00:13:15.363 "name": null, 00:13:15.363 "uuid": "5675f71c-356f-480a-90e9-41879a88fc3b", 00:13:15.363 "is_configured": false, 00:13:15.363 "data_offset": 0, 00:13:15.363 "data_size": 65536 00:13:15.363 }, 00:13:15.363 { 00:13:15.363 "name": "BaseBdev2", 00:13:15.363 "uuid": "7ccc92d4-3e1d-419f-b316-e73f7c424a24", 00:13:15.363 "is_configured": true, 00:13:15.363 "data_offset": 0, 00:13:15.363 "data_size": 65536 00:13:15.363 }, 00:13:15.363 { 00:13:15.363 "name": "BaseBdev3", 00:13:15.363 "uuid": "d3ecc73c-8b7a-4045-a3db-d905e7ab7ede", 00:13:15.363 "is_configured": true, 00:13:15.363 "data_offset": 0, 00:13:15.363 "data_size": 65536 00:13:15.363 } 00:13:15.363 ] 00:13:15.363 }' 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.363 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.933 01:14:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5675f71c-356f-480a-90e9-41879a88fc3b 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.933 [2024-10-15 01:14:28.577578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:15.933 [2024-10-15 01:14:28.577676] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:15.933 [2024-10-15 01:14:28.577704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:15.933 [2024-10-15 01:14:28.577987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002870 00:13:15.933 [2024-10-15 01:14:28.578430] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:15.933 NewBaseBdev 00:13:15.933 [2024-10-15 01:14:28.578488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:15.933 [2024-10-15 01:14:28.578669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.933 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.934 01:14:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.934 [ 00:13:15.934 { 00:13:15.934 "name": "NewBaseBdev", 00:13:15.934 "aliases": [ 00:13:15.934 "5675f71c-356f-480a-90e9-41879a88fc3b" 00:13:15.934 ], 00:13:15.934 "product_name": "Malloc disk", 00:13:15.934 "block_size": 512, 00:13:15.934 "num_blocks": 65536, 00:13:15.934 "uuid": "5675f71c-356f-480a-90e9-41879a88fc3b", 00:13:15.934 "assigned_rate_limits": { 00:13:15.934 "rw_ios_per_sec": 0, 00:13:15.934 "rw_mbytes_per_sec": 0, 00:13:15.934 "r_mbytes_per_sec": 0, 00:13:15.934 "w_mbytes_per_sec": 0 00:13:15.934 }, 00:13:15.934 "claimed": true, 00:13:15.934 "claim_type": "exclusive_write", 00:13:15.934 "zoned": false, 00:13:15.934 "supported_io_types": { 00:13:15.934 "read": true, 00:13:15.934 "write": true, 00:13:15.934 "unmap": true, 00:13:15.934 "flush": true, 00:13:15.934 "reset": true, 00:13:15.934 "nvme_admin": false, 00:13:15.934 "nvme_io": false, 00:13:15.934 "nvme_io_md": false, 00:13:15.934 "write_zeroes": true, 00:13:15.934 "zcopy": true, 00:13:15.934 "get_zone_info": false, 00:13:15.934 "zone_management": false, 00:13:15.934 "zone_append": false, 00:13:15.934 "compare": false, 00:13:15.934 "compare_and_write": false, 00:13:15.934 "abort": true, 00:13:15.934 "seek_hole": false, 00:13:15.934 "seek_data": false, 00:13:15.934 "copy": true, 00:13:15.934 "nvme_iov_md": false 00:13:15.934 }, 00:13:15.934 "memory_domains": [ 00:13:15.934 { 00:13:15.934 "dma_device_id": "system", 00:13:15.934 "dma_device_type": 1 00:13:15.934 }, 00:13:15.934 { 00:13:15.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.934 "dma_device_type": 2 00:13:15.934 } 00:13:15.934 ], 00:13:15.934 "driver_specific": {} 00:13:15.934 } 00:13:15.934 ] 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:15.934 01:14:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.934 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.193 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.193 "name": "Existed_Raid", 00:13:16.193 "uuid": "457c58a1-dc09-4b6c-be21-ef9799f19689", 00:13:16.193 "strip_size_kb": 64, 00:13:16.193 "state": "online", 
00:13:16.193 "raid_level": "raid5f", 00:13:16.193 "superblock": false, 00:13:16.193 "num_base_bdevs": 3, 00:13:16.193 "num_base_bdevs_discovered": 3, 00:13:16.193 "num_base_bdevs_operational": 3, 00:13:16.193 "base_bdevs_list": [ 00:13:16.193 { 00:13:16.193 "name": "NewBaseBdev", 00:13:16.193 "uuid": "5675f71c-356f-480a-90e9-41879a88fc3b", 00:13:16.193 "is_configured": true, 00:13:16.193 "data_offset": 0, 00:13:16.193 "data_size": 65536 00:13:16.193 }, 00:13:16.193 { 00:13:16.193 "name": "BaseBdev2", 00:13:16.193 "uuid": "7ccc92d4-3e1d-419f-b316-e73f7c424a24", 00:13:16.193 "is_configured": true, 00:13:16.193 "data_offset": 0, 00:13:16.193 "data_size": 65536 00:13:16.193 }, 00:13:16.193 { 00:13:16.193 "name": "BaseBdev3", 00:13:16.193 "uuid": "d3ecc73c-8b7a-4045-a3db-d905e7ab7ede", 00:13:16.193 "is_configured": true, 00:13:16.193 "data_offset": 0, 00:13:16.193 "data_size": 65536 00:13:16.193 } 00:13:16.193 ] 00:13:16.193 }' 00:13:16.193 01:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.193 01:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.453 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:16.453 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:16.453 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:16.453 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:16.453 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:16.453 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:16.453 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:16.453 01:14:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:16.453 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.453 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.453 [2024-10-15 01:14:29.068989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.453 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.453 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:16.453 "name": "Existed_Raid", 00:13:16.453 "aliases": [ 00:13:16.453 "457c58a1-dc09-4b6c-be21-ef9799f19689" 00:13:16.453 ], 00:13:16.453 "product_name": "Raid Volume", 00:13:16.453 "block_size": 512, 00:13:16.453 "num_blocks": 131072, 00:13:16.453 "uuid": "457c58a1-dc09-4b6c-be21-ef9799f19689", 00:13:16.453 "assigned_rate_limits": { 00:13:16.453 "rw_ios_per_sec": 0, 00:13:16.453 "rw_mbytes_per_sec": 0, 00:13:16.453 "r_mbytes_per_sec": 0, 00:13:16.453 "w_mbytes_per_sec": 0 00:13:16.453 }, 00:13:16.453 "claimed": false, 00:13:16.453 "zoned": false, 00:13:16.453 "supported_io_types": { 00:13:16.453 "read": true, 00:13:16.453 "write": true, 00:13:16.453 "unmap": false, 00:13:16.453 "flush": false, 00:13:16.453 "reset": true, 00:13:16.453 "nvme_admin": false, 00:13:16.453 "nvme_io": false, 00:13:16.453 "nvme_io_md": false, 00:13:16.453 "write_zeroes": true, 00:13:16.453 "zcopy": false, 00:13:16.453 "get_zone_info": false, 00:13:16.453 "zone_management": false, 00:13:16.453 "zone_append": false, 00:13:16.453 "compare": false, 00:13:16.453 "compare_and_write": false, 00:13:16.453 "abort": false, 00:13:16.453 "seek_hole": false, 00:13:16.453 "seek_data": false, 00:13:16.453 "copy": false, 00:13:16.453 "nvme_iov_md": false 00:13:16.453 }, 00:13:16.453 "driver_specific": { 00:13:16.453 "raid": { 00:13:16.453 "uuid": 
"457c58a1-dc09-4b6c-be21-ef9799f19689", 00:13:16.453 "strip_size_kb": 64, 00:13:16.453 "state": "online", 00:13:16.453 "raid_level": "raid5f", 00:13:16.453 "superblock": false, 00:13:16.453 "num_base_bdevs": 3, 00:13:16.453 "num_base_bdevs_discovered": 3, 00:13:16.453 "num_base_bdevs_operational": 3, 00:13:16.453 "base_bdevs_list": [ 00:13:16.453 { 00:13:16.453 "name": "NewBaseBdev", 00:13:16.453 "uuid": "5675f71c-356f-480a-90e9-41879a88fc3b", 00:13:16.453 "is_configured": true, 00:13:16.453 "data_offset": 0, 00:13:16.453 "data_size": 65536 00:13:16.453 }, 00:13:16.453 { 00:13:16.453 "name": "BaseBdev2", 00:13:16.453 "uuid": "7ccc92d4-3e1d-419f-b316-e73f7c424a24", 00:13:16.453 "is_configured": true, 00:13:16.453 "data_offset": 0, 00:13:16.453 "data_size": 65536 00:13:16.453 }, 00:13:16.453 { 00:13:16.453 "name": "BaseBdev3", 00:13:16.453 "uuid": "d3ecc73c-8b7a-4045-a3db-d905e7ab7ede", 00:13:16.453 "is_configured": true, 00:13:16.453 "data_offset": 0, 00:13:16.453 "data_size": 65536 00:13:16.453 } 00:13:16.453 ] 00:13:16.453 } 00:13:16.453 } 00:13:16.453 }' 00:13:16.453 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:16.453 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:16.453 BaseBdev2 00:13:16.453 BaseBdev3' 00:13:16.453 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.718 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:16.718 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.718 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.718 01:14:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:16.718 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.718 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.718 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.718 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.718 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.718 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.718 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.718 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.719 [2024-10-15 01:14:29.336299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:16.719 [2024-10-15 01:14:29.336365] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:16.719 [2024-10-15 01:14:29.336481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.719 [2024-10-15 01:14:29.336746] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.719 [2024-10-15 01:14:29.336772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90200 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90200 ']' 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 90200 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90200 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:16.719 killing process with pid 90200 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90200' 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90200 00:13:16.719 [2024-10-15 01:14:29.383360] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:16.719 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90200 00:13:16.719 [2024-10-15 01:14:29.413776] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:16.994 00:13:16.994 real 0m8.792s 00:13:16.994 user 0m15.100s 00:13:16.994 sys 0m1.711s 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.994 ************************************ 00:13:16.994 END TEST raid5f_state_function_test 00:13:16.994 ************************************ 00:13:16.994 01:14:29 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:16.994 01:14:29 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:16.994 01:14:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.994 01:14:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:16.994 ************************************ 00:13:16.994 START TEST raid5f_state_function_test_sb 00:13:16.994 ************************************ 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:16.994 01:14:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:16.994 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:17.263 Process raid pid: 90799 00:13:17.263 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=90799 00:13:17.263 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:17.263 01:14:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90799' 00:13:17.263 01:14:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 90799 00:13:17.263 01:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 90799 ']' 00:13:17.263 01:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.263 01:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:17.263 01:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.263 01:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:17.263 01:14:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.263 [2024-10-15 01:14:29.792232] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:13:17.263 [2024-10-15 01:14:29.792426] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.263 [2024-10-15 01:14:29.918490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.263 [2024-10-15 01:14:29.943534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.529 [2024-10-15 01:14:29.987539] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.529 [2024-10-15 01:14:29.987654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.099 [2024-10-15 01:14:30.621759] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:18.099 [2024-10-15 01:14:30.621871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:18.099 [2024-10-15 01:14:30.621900] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:18.099 [2024-10-15 01:14:30.621922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:18.099 [2024-10-15 01:14:30.621940] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:18.099 [2024-10-15 01:14:30.621965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.099 01:14:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.099 "name": "Existed_Raid", 00:13:18.099 "uuid": "0b8e3f3a-dccf-4396-be49-da19fe646b5d", 00:13:18.099 "strip_size_kb": 64, 00:13:18.099 "state": "configuring", 00:13:18.099 "raid_level": "raid5f", 00:13:18.099 "superblock": true, 00:13:18.099 "num_base_bdevs": 3, 00:13:18.099 "num_base_bdevs_discovered": 0, 00:13:18.099 "num_base_bdevs_operational": 3, 00:13:18.099 "base_bdevs_list": [ 00:13:18.099 { 00:13:18.099 "name": "BaseBdev1", 00:13:18.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.099 "is_configured": false, 00:13:18.099 "data_offset": 0, 00:13:18.099 "data_size": 0 00:13:18.099 }, 00:13:18.099 { 00:13:18.099 "name": "BaseBdev2", 00:13:18.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.099 "is_configured": false, 00:13:18.099 "data_offset": 0, 00:13:18.099 "data_size": 0 00:13:18.099 }, 00:13:18.099 { 00:13:18.099 "name": "BaseBdev3", 00:13:18.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.099 "is_configured": false, 00:13:18.099 "data_offset": 0, 00:13:18.099 "data_size": 0 00:13:18.099 } 00:13:18.099 ] 00:13:18.099 }' 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.099 01:14:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.669 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:18.669 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.669 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.669 [2024-10-15 01:14:31.092862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:18.669 
[2024-10-15 01:14:31.092952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:13:18.669 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.670 [2024-10-15 01:14:31.104867] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:18.670 [2024-10-15 01:14:31.104945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:18.670 [2024-10-15 01:14:31.104972] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:18.670 [2024-10-15 01:14:31.104994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:18.670 [2024-10-15 01:14:31.105011] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:18.670 [2024-10-15 01:14:31.105032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.670 [2024-10-15 01:14:31.125818] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.670 BaseBdev1 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.670 [ 00:13:18.670 { 00:13:18.670 "name": "BaseBdev1", 00:13:18.670 "aliases": [ 00:13:18.670 "5441ba04-383d-4e97-bc19-85f0a12a8710" 00:13:18.670 ], 00:13:18.670 "product_name": "Malloc disk", 00:13:18.670 "block_size": 512, 00:13:18.670 
"num_blocks": 65536, 00:13:18.670 "uuid": "5441ba04-383d-4e97-bc19-85f0a12a8710", 00:13:18.670 "assigned_rate_limits": { 00:13:18.670 "rw_ios_per_sec": 0, 00:13:18.670 "rw_mbytes_per_sec": 0, 00:13:18.670 "r_mbytes_per_sec": 0, 00:13:18.670 "w_mbytes_per_sec": 0 00:13:18.670 }, 00:13:18.670 "claimed": true, 00:13:18.670 "claim_type": "exclusive_write", 00:13:18.670 "zoned": false, 00:13:18.670 "supported_io_types": { 00:13:18.670 "read": true, 00:13:18.670 "write": true, 00:13:18.670 "unmap": true, 00:13:18.670 "flush": true, 00:13:18.670 "reset": true, 00:13:18.670 "nvme_admin": false, 00:13:18.670 "nvme_io": false, 00:13:18.670 "nvme_io_md": false, 00:13:18.670 "write_zeroes": true, 00:13:18.670 "zcopy": true, 00:13:18.670 "get_zone_info": false, 00:13:18.670 "zone_management": false, 00:13:18.670 "zone_append": false, 00:13:18.670 "compare": false, 00:13:18.670 "compare_and_write": false, 00:13:18.670 "abort": true, 00:13:18.670 "seek_hole": false, 00:13:18.670 "seek_data": false, 00:13:18.670 "copy": true, 00:13:18.670 "nvme_iov_md": false 00:13:18.670 }, 00:13:18.670 "memory_domains": [ 00:13:18.670 { 00:13:18.670 "dma_device_id": "system", 00:13:18.670 "dma_device_type": 1 00:13:18.670 }, 00:13:18.670 { 00:13:18.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.670 "dma_device_type": 2 00:13:18.670 } 00:13:18.670 ], 00:13:18.670 "driver_specific": {} 00:13:18.670 } 00:13:18.670 ] 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.670 "name": "Existed_Raid", 00:13:18.670 "uuid": "decba61a-c494-43c6-8ce3-f9fe285c07d7", 00:13:18.670 "strip_size_kb": 64, 00:13:18.670 "state": "configuring", 00:13:18.670 "raid_level": "raid5f", 00:13:18.670 "superblock": true, 00:13:18.670 "num_base_bdevs": 3, 00:13:18.670 "num_base_bdevs_discovered": 1, 00:13:18.670 "num_base_bdevs_operational": 3, 00:13:18.670 "base_bdevs_list": [ 00:13:18.670 { 00:13:18.670 
"name": "BaseBdev1", 00:13:18.670 "uuid": "5441ba04-383d-4e97-bc19-85f0a12a8710", 00:13:18.670 "is_configured": true, 00:13:18.670 "data_offset": 2048, 00:13:18.670 "data_size": 63488 00:13:18.670 }, 00:13:18.670 { 00:13:18.670 "name": "BaseBdev2", 00:13:18.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.670 "is_configured": false, 00:13:18.670 "data_offset": 0, 00:13:18.670 "data_size": 0 00:13:18.670 }, 00:13:18.670 { 00:13:18.670 "name": "BaseBdev3", 00:13:18.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.670 "is_configured": false, 00:13:18.670 "data_offset": 0, 00:13:18.670 "data_size": 0 00:13:18.670 } 00:13:18.670 ] 00:13:18.670 }' 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.670 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.931 [2024-10-15 01:14:31.581083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:18.931 [2024-10-15 01:14:31.581135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:18.931 [2024-10-15 01:14:31.589137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.931 [2024-10-15 01:14:31.590920] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:18.931 [2024-10-15 01:14:31.590957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:18.931 [2024-10-15 01:14:31.590966] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:18.931 [2024-10-15 01:14:31.590976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.931 "name": "Existed_Raid", 00:13:18.931 "uuid": "045f1fb6-297b-4aee-ae1f-1adef75f54d9", 00:13:18.931 "strip_size_kb": 64, 00:13:18.931 "state": "configuring", 00:13:18.931 "raid_level": "raid5f", 00:13:18.931 "superblock": true, 00:13:18.931 "num_base_bdevs": 3, 00:13:18.931 "num_base_bdevs_discovered": 1, 00:13:18.931 "num_base_bdevs_operational": 3, 00:13:18.931 "base_bdevs_list": [ 00:13:18.931 { 00:13:18.931 "name": "BaseBdev1", 00:13:18.931 "uuid": "5441ba04-383d-4e97-bc19-85f0a12a8710", 00:13:18.931 "is_configured": true, 00:13:18.931 "data_offset": 2048, 00:13:18.931 "data_size": 63488 00:13:18.931 }, 00:13:18.931 { 00:13:18.931 "name": "BaseBdev2", 00:13:18.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.931 "is_configured": false, 00:13:18.931 "data_offset": 0, 00:13:18.931 "data_size": 0 00:13:18.931 }, 00:13:18.931 { 00:13:18.931 "name": "BaseBdev3", 00:13:18.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.931 "is_configured": false, 00:13:18.931 "data_offset": 0, 00:13:18.931 "data_size": 
0 00:13:18.931 } 00:13:18.931 ] 00:13:18.931 }' 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.931 01:14:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.502 [2024-10-15 01:14:32.043477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.502 BaseBdev2 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.502 [ 00:13:19.502 { 00:13:19.502 "name": "BaseBdev2", 00:13:19.502 "aliases": [ 00:13:19.502 "55b0383c-d5e8-4f42-b0ab-38683eac84dc" 00:13:19.502 ], 00:13:19.502 "product_name": "Malloc disk", 00:13:19.502 "block_size": 512, 00:13:19.502 "num_blocks": 65536, 00:13:19.502 "uuid": "55b0383c-d5e8-4f42-b0ab-38683eac84dc", 00:13:19.502 "assigned_rate_limits": { 00:13:19.502 "rw_ios_per_sec": 0, 00:13:19.502 "rw_mbytes_per_sec": 0, 00:13:19.502 "r_mbytes_per_sec": 0, 00:13:19.502 "w_mbytes_per_sec": 0 00:13:19.502 }, 00:13:19.502 "claimed": true, 00:13:19.502 "claim_type": "exclusive_write", 00:13:19.502 "zoned": false, 00:13:19.502 "supported_io_types": { 00:13:19.502 "read": true, 00:13:19.502 "write": true, 00:13:19.502 "unmap": true, 00:13:19.502 "flush": true, 00:13:19.502 "reset": true, 00:13:19.502 "nvme_admin": false, 00:13:19.502 "nvme_io": false, 00:13:19.502 "nvme_io_md": false, 00:13:19.502 "write_zeroes": true, 00:13:19.502 "zcopy": true, 00:13:19.502 "get_zone_info": false, 00:13:19.502 "zone_management": false, 00:13:19.502 "zone_append": false, 00:13:19.502 "compare": false, 00:13:19.502 "compare_and_write": false, 00:13:19.502 "abort": true, 00:13:19.502 "seek_hole": false, 00:13:19.502 "seek_data": false, 00:13:19.502 "copy": true, 00:13:19.502 "nvme_iov_md": false 00:13:19.502 }, 00:13:19.502 "memory_domains": [ 00:13:19.502 { 00:13:19.502 "dma_device_id": "system", 00:13:19.502 "dma_device_type": 1 00:13:19.502 }, 00:13:19.502 { 00:13:19.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.502 "dma_device_type": 2 00:13:19.502 } 
00:13:19.502 ], 00:13:19.502 "driver_specific": {} 00:13:19.502 } 00:13:19.502 ] 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.502 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.502 "name": "Existed_Raid", 00:13:19.502 "uuid": "045f1fb6-297b-4aee-ae1f-1adef75f54d9", 00:13:19.502 "strip_size_kb": 64, 00:13:19.502 "state": "configuring", 00:13:19.502 "raid_level": "raid5f", 00:13:19.502 "superblock": true, 00:13:19.502 "num_base_bdevs": 3, 00:13:19.502 "num_base_bdevs_discovered": 2, 00:13:19.502 "num_base_bdevs_operational": 3, 00:13:19.502 "base_bdevs_list": [ 00:13:19.502 { 00:13:19.502 "name": "BaseBdev1", 00:13:19.502 "uuid": "5441ba04-383d-4e97-bc19-85f0a12a8710", 00:13:19.502 "is_configured": true, 00:13:19.502 "data_offset": 2048, 00:13:19.502 "data_size": 63488 00:13:19.502 }, 00:13:19.502 { 00:13:19.502 "name": "BaseBdev2", 00:13:19.502 "uuid": "55b0383c-d5e8-4f42-b0ab-38683eac84dc", 00:13:19.502 "is_configured": true, 00:13:19.502 "data_offset": 2048, 00:13:19.502 "data_size": 63488 00:13:19.502 }, 00:13:19.502 { 00:13:19.502 "name": "BaseBdev3", 00:13:19.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.502 "is_configured": false, 00:13:19.503 "data_offset": 0, 00:13:19.503 "data_size": 0 00:13:19.503 } 00:13:19.503 ] 00:13:19.503 }' 00:13:19.503 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.503 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.072 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:20.072 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:13:20.072 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.072 [2024-10-15 01:14:32.522601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.072 [2024-10-15 01:14:32.522879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:20.072 [2024-10-15 01:14:32.522922] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:20.072 [2024-10-15 01:14:32.523237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:20.072 BaseBdev3 00:13:20.072 [2024-10-15 01:14:32.523758] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:20.072 [2024-10-15 01:14:32.523786] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:20.072 [2024-10-15 01:14:32.523961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.072 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.072 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:20.072 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:20.072 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:20.072 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.073 [ 00:13:20.073 { 00:13:20.073 "name": "BaseBdev3", 00:13:20.073 "aliases": [ 00:13:20.073 "1ffbd28d-03b3-47c0-aa66-d9592a83217e" 00:13:20.073 ], 00:13:20.073 "product_name": "Malloc disk", 00:13:20.073 "block_size": 512, 00:13:20.073 "num_blocks": 65536, 00:13:20.073 "uuid": "1ffbd28d-03b3-47c0-aa66-d9592a83217e", 00:13:20.073 "assigned_rate_limits": { 00:13:20.073 "rw_ios_per_sec": 0, 00:13:20.073 "rw_mbytes_per_sec": 0, 00:13:20.073 "r_mbytes_per_sec": 0, 00:13:20.073 "w_mbytes_per_sec": 0 00:13:20.073 }, 00:13:20.073 "claimed": true, 00:13:20.073 "claim_type": "exclusive_write", 00:13:20.073 "zoned": false, 00:13:20.073 "supported_io_types": { 00:13:20.073 "read": true, 00:13:20.073 "write": true, 00:13:20.073 "unmap": true, 00:13:20.073 "flush": true, 00:13:20.073 "reset": true, 00:13:20.073 "nvme_admin": false, 00:13:20.073 "nvme_io": false, 00:13:20.073 "nvme_io_md": false, 00:13:20.073 "write_zeroes": true, 00:13:20.073 "zcopy": true, 00:13:20.073 "get_zone_info": false, 00:13:20.073 "zone_management": false, 00:13:20.073 "zone_append": false, 00:13:20.073 "compare": false, 00:13:20.073 "compare_and_write": false, 00:13:20.073 "abort": true, 00:13:20.073 "seek_hole": false, 00:13:20.073 "seek_data": false, 00:13:20.073 "copy": true, 00:13:20.073 "nvme_iov_md": 
false 00:13:20.073 }, 00:13:20.073 "memory_domains": [ 00:13:20.073 { 00:13:20.073 "dma_device_id": "system", 00:13:20.073 "dma_device_type": 1 00:13:20.073 }, 00:13:20.073 { 00:13:20.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.073 "dma_device_type": 2 00:13:20.073 } 00:13:20.073 ], 00:13:20.073 "driver_specific": {} 00:13:20.073 } 00:13:20.073 ] 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.073 "name": "Existed_Raid", 00:13:20.073 "uuid": "045f1fb6-297b-4aee-ae1f-1adef75f54d9", 00:13:20.073 "strip_size_kb": 64, 00:13:20.073 "state": "online", 00:13:20.073 "raid_level": "raid5f", 00:13:20.073 "superblock": true, 00:13:20.073 "num_base_bdevs": 3, 00:13:20.073 "num_base_bdevs_discovered": 3, 00:13:20.073 "num_base_bdevs_operational": 3, 00:13:20.073 "base_bdevs_list": [ 00:13:20.073 { 00:13:20.073 "name": "BaseBdev1", 00:13:20.073 "uuid": "5441ba04-383d-4e97-bc19-85f0a12a8710", 00:13:20.073 "is_configured": true, 00:13:20.073 "data_offset": 2048, 00:13:20.073 "data_size": 63488 00:13:20.073 }, 00:13:20.073 { 00:13:20.073 "name": "BaseBdev2", 00:13:20.073 "uuid": "55b0383c-d5e8-4f42-b0ab-38683eac84dc", 00:13:20.073 "is_configured": true, 00:13:20.073 "data_offset": 2048, 00:13:20.073 "data_size": 63488 00:13:20.073 }, 00:13:20.073 { 00:13:20.073 "name": "BaseBdev3", 00:13:20.073 "uuid": "1ffbd28d-03b3-47c0-aa66-d9592a83217e", 00:13:20.073 "is_configured": true, 00:13:20.073 "data_offset": 2048, 00:13:20.073 "data_size": 63488 00:13:20.073 } 00:13:20.073 ] 00:13:20.073 }' 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.073 01:14:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:20.333 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:20.333 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:20.333 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:20.333 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:20.333 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:20.333 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:20.333 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:20.333 01:14:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:20.333 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.333 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.333 [2024-10-15 01:14:32.978077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.333 01:14:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.333 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:20.333 "name": "Existed_Raid", 00:13:20.333 "aliases": [ 00:13:20.333 "045f1fb6-297b-4aee-ae1f-1adef75f54d9" 00:13:20.333 ], 00:13:20.333 "product_name": "Raid Volume", 00:13:20.333 "block_size": 512, 00:13:20.333 "num_blocks": 126976, 00:13:20.333 "uuid": "045f1fb6-297b-4aee-ae1f-1adef75f54d9", 00:13:20.333 "assigned_rate_limits": { 00:13:20.333 "rw_ios_per_sec": 0, 00:13:20.333 "rw_mbytes_per_sec": 0, 00:13:20.333 "r_mbytes_per_sec": 
0, 00:13:20.333 "w_mbytes_per_sec": 0 00:13:20.333 }, 00:13:20.333 "claimed": false, 00:13:20.333 "zoned": false, 00:13:20.333 "supported_io_types": { 00:13:20.333 "read": true, 00:13:20.333 "write": true, 00:13:20.333 "unmap": false, 00:13:20.333 "flush": false, 00:13:20.333 "reset": true, 00:13:20.333 "nvme_admin": false, 00:13:20.333 "nvme_io": false, 00:13:20.333 "nvme_io_md": false, 00:13:20.333 "write_zeroes": true, 00:13:20.333 "zcopy": false, 00:13:20.333 "get_zone_info": false, 00:13:20.333 "zone_management": false, 00:13:20.333 "zone_append": false, 00:13:20.333 "compare": false, 00:13:20.333 "compare_and_write": false, 00:13:20.333 "abort": false, 00:13:20.333 "seek_hole": false, 00:13:20.333 "seek_data": false, 00:13:20.333 "copy": false, 00:13:20.333 "nvme_iov_md": false 00:13:20.333 }, 00:13:20.333 "driver_specific": { 00:13:20.333 "raid": { 00:13:20.333 "uuid": "045f1fb6-297b-4aee-ae1f-1adef75f54d9", 00:13:20.333 "strip_size_kb": 64, 00:13:20.333 "state": "online", 00:13:20.333 "raid_level": "raid5f", 00:13:20.333 "superblock": true, 00:13:20.333 "num_base_bdevs": 3, 00:13:20.333 "num_base_bdevs_discovered": 3, 00:13:20.333 "num_base_bdevs_operational": 3, 00:13:20.333 "base_bdevs_list": [ 00:13:20.333 { 00:13:20.333 "name": "BaseBdev1", 00:13:20.333 "uuid": "5441ba04-383d-4e97-bc19-85f0a12a8710", 00:13:20.333 "is_configured": true, 00:13:20.333 "data_offset": 2048, 00:13:20.333 "data_size": 63488 00:13:20.333 }, 00:13:20.333 { 00:13:20.333 "name": "BaseBdev2", 00:13:20.333 "uuid": "55b0383c-d5e8-4f42-b0ab-38683eac84dc", 00:13:20.333 "is_configured": true, 00:13:20.333 "data_offset": 2048, 00:13:20.333 "data_size": 63488 00:13:20.333 }, 00:13:20.333 { 00:13:20.333 "name": "BaseBdev3", 00:13:20.333 "uuid": "1ffbd28d-03b3-47c0-aa66-d9592a83217e", 00:13:20.333 "is_configured": true, 00:13:20.333 "data_offset": 2048, 00:13:20.333 "data_size": 63488 00:13:20.333 } 00:13:20.333 ] 00:13:20.334 } 00:13:20.334 } 00:13:20.334 }' 00:13:20.334 01:14:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:20.594 BaseBdev2 00:13:20.594 BaseBdev3' 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.594 [2024-10-15 01:14:33.269414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.594 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.854 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.854 "name": "Existed_Raid", 00:13:20.854 "uuid": "045f1fb6-297b-4aee-ae1f-1adef75f54d9", 00:13:20.854 "strip_size_kb": 64, 00:13:20.854 "state": "online", 00:13:20.854 "raid_level": "raid5f", 00:13:20.854 "superblock": true, 00:13:20.854 "num_base_bdevs": 3, 00:13:20.854 "num_base_bdevs_discovered": 2, 00:13:20.854 "num_base_bdevs_operational": 2, 00:13:20.854 "base_bdevs_list": [ 00:13:20.854 { 00:13:20.854 "name": null, 00:13:20.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.854 "is_configured": false, 00:13:20.854 "data_offset": 0, 00:13:20.854 "data_size": 63488 00:13:20.854 }, 00:13:20.854 { 00:13:20.854 "name": "BaseBdev2", 00:13:20.854 "uuid": "55b0383c-d5e8-4f42-b0ab-38683eac84dc", 00:13:20.854 "is_configured": true, 00:13:20.854 "data_offset": 2048, 00:13:20.854 "data_size": 63488 00:13:20.854 }, 00:13:20.854 { 00:13:20.854 "name": "BaseBdev3", 00:13:20.854 "uuid": "1ffbd28d-03b3-47c0-aa66-d9592a83217e", 00:13:20.854 "is_configured": true, 00:13:20.854 "data_offset": 2048, 00:13:20.854 "data_size": 63488 00:13:20.854 } 00:13:20.854 ] 00:13:20.854 }' 00:13:20.854 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.854 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.114 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:13:21.114 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:21.114 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.114 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.114 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.114 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:21.114 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.114 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:21.114 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:21.114 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:21.115 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.115 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.115 [2024-10-15 01:14:33.792059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:21.115 [2024-10-15 01:14:33.792280] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:21.115 [2024-10-15 01:14:33.803420] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:21.115 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.115 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:21.115 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:21.115 01:14:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.115 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:21.115 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.115 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.115 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.375 [2024-10-15 01:14:33.863338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:21.375 [2024-10-15 01:14:33.863383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:21.375 
01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.375 BaseBdev2 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.375 [ 00:13:21.375 { 00:13:21.375 "name": "BaseBdev2", 00:13:21.375 "aliases": [ 00:13:21.375 "a3b7d39c-6e58-46d6-8682-ff0480ef0779" 00:13:21.375 ], 00:13:21.375 "product_name": "Malloc disk", 00:13:21.375 "block_size": 512, 00:13:21.375 "num_blocks": 65536, 00:13:21.375 "uuid": "a3b7d39c-6e58-46d6-8682-ff0480ef0779", 00:13:21.375 "assigned_rate_limits": { 00:13:21.375 "rw_ios_per_sec": 0, 00:13:21.375 "rw_mbytes_per_sec": 0, 00:13:21.375 "r_mbytes_per_sec": 0, 00:13:21.375 "w_mbytes_per_sec": 0 00:13:21.375 }, 00:13:21.375 "claimed": false, 00:13:21.375 "zoned": false, 00:13:21.375 "supported_io_types": { 00:13:21.375 "read": true, 00:13:21.375 "write": true, 00:13:21.375 "unmap": true, 00:13:21.375 "flush": true, 00:13:21.375 "reset": true, 00:13:21.375 "nvme_admin": false, 00:13:21.375 "nvme_io": false, 00:13:21.375 "nvme_io_md": false, 00:13:21.375 "write_zeroes": true, 00:13:21.375 "zcopy": true, 00:13:21.375 "get_zone_info": false, 00:13:21.375 "zone_management": false, 00:13:21.375 "zone_append": false, 00:13:21.375 "compare": false, 00:13:21.375 "compare_and_write": false, 
00:13:21.375 "abort": true, 00:13:21.375 "seek_hole": false, 00:13:21.375 "seek_data": false, 00:13:21.375 "copy": true, 00:13:21.375 "nvme_iov_md": false 00:13:21.375 }, 00:13:21.375 "memory_domains": [ 00:13:21.375 { 00:13:21.375 "dma_device_id": "system", 00:13:21.375 "dma_device_type": 1 00:13:21.375 }, 00:13:21.375 { 00:13:21.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.375 "dma_device_type": 2 00:13:21.375 } 00:13:21.375 ], 00:13:21.375 "driver_specific": {} 00:13:21.375 } 00:13:21.375 ] 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:21.375 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:21.376 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:21.376 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:21.376 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.376 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.376 BaseBdev3 00:13:21.376 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.376 01:14:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:21.376 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:21.376 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:21.376 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:21.376 01:14:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:21.376 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:21.376 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:21.376 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.376 01:14:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.376 [ 00:13:21.376 { 00:13:21.376 "name": "BaseBdev3", 00:13:21.376 "aliases": [ 00:13:21.376 "050f0a6b-4c2d-4eba-9a11-a251262c3a76" 00:13:21.376 ], 00:13:21.376 "product_name": "Malloc disk", 00:13:21.376 "block_size": 512, 00:13:21.376 "num_blocks": 65536, 00:13:21.376 "uuid": "050f0a6b-4c2d-4eba-9a11-a251262c3a76", 00:13:21.376 "assigned_rate_limits": { 00:13:21.376 "rw_ios_per_sec": 0, 00:13:21.376 "rw_mbytes_per_sec": 0, 00:13:21.376 "r_mbytes_per_sec": 0, 00:13:21.376 "w_mbytes_per_sec": 0 00:13:21.376 }, 00:13:21.376 "claimed": false, 00:13:21.376 "zoned": false, 00:13:21.376 "supported_io_types": { 00:13:21.376 "read": true, 00:13:21.376 "write": true, 00:13:21.376 "unmap": true, 00:13:21.376 "flush": true, 00:13:21.376 "reset": true, 00:13:21.376 "nvme_admin": false, 00:13:21.376 "nvme_io": false, 00:13:21.376 "nvme_io_md": false, 00:13:21.376 "write_zeroes": true, 00:13:21.376 "zcopy": true, 00:13:21.376 "get_zone_info": false, 00:13:21.376 "zone_management": false, 
00:13:21.376 "zone_append": false, 00:13:21.376 "compare": false, 00:13:21.376 "compare_and_write": false, 00:13:21.376 "abort": true, 00:13:21.376 "seek_hole": false, 00:13:21.376 "seek_data": false, 00:13:21.376 "copy": true, 00:13:21.376 "nvme_iov_md": false 00:13:21.376 }, 00:13:21.376 "memory_domains": [ 00:13:21.376 { 00:13:21.376 "dma_device_id": "system", 00:13:21.376 "dma_device_type": 1 00:13:21.376 }, 00:13:21.376 { 00:13:21.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.376 "dma_device_type": 2 00:13:21.376 } 00:13:21.376 ], 00:13:21.376 "driver_specific": {} 00:13:21.376 } 00:13:21.376 ] 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.376 [2024-10-15 01:14:34.034108] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.376 [2024-10-15 01:14:34.034195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.376 [2024-10-15 01:14:34.034252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.376 [2024-10-15 01:14:34.036027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:21.376 
01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:21.376 "name": "Existed_Raid", 00:13:21.376 "uuid": "7492db19-dbae-4391-bbfe-420b44527b47", 00:13:21.376 "strip_size_kb": 64, 00:13:21.376 "state": "configuring", 00:13:21.376 "raid_level": "raid5f", 00:13:21.376 "superblock": true, 00:13:21.376 "num_base_bdevs": 3, 00:13:21.376 "num_base_bdevs_discovered": 2, 00:13:21.376 "num_base_bdevs_operational": 3, 00:13:21.376 "base_bdevs_list": [ 00:13:21.376 { 00:13:21.376 "name": "BaseBdev1", 00:13:21.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.376 "is_configured": false, 00:13:21.376 "data_offset": 0, 00:13:21.376 "data_size": 0 00:13:21.376 }, 00:13:21.376 { 00:13:21.376 "name": "BaseBdev2", 00:13:21.376 "uuid": "a3b7d39c-6e58-46d6-8682-ff0480ef0779", 00:13:21.376 "is_configured": true, 00:13:21.376 "data_offset": 2048, 00:13:21.376 "data_size": 63488 00:13:21.376 }, 00:13:21.376 { 00:13:21.376 "name": "BaseBdev3", 00:13:21.376 "uuid": "050f0a6b-4c2d-4eba-9a11-a251262c3a76", 00:13:21.376 "is_configured": true, 00:13:21.376 "data_offset": 2048, 00:13:21.376 "data_size": 63488 00:13:21.376 } 00:13:21.376 ] 00:13:21.376 }' 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.376 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.947 [2024-10-15 01:14:34.469349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.947 "name": "Existed_Raid", 00:13:21.947 "uuid": "7492db19-dbae-4391-bbfe-420b44527b47", 00:13:21.947 "strip_size_kb": 64, 00:13:21.947 
"state": "configuring", 00:13:21.947 "raid_level": "raid5f", 00:13:21.947 "superblock": true, 00:13:21.947 "num_base_bdevs": 3, 00:13:21.947 "num_base_bdevs_discovered": 1, 00:13:21.947 "num_base_bdevs_operational": 3, 00:13:21.947 "base_bdevs_list": [ 00:13:21.947 { 00:13:21.947 "name": "BaseBdev1", 00:13:21.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.947 "is_configured": false, 00:13:21.947 "data_offset": 0, 00:13:21.947 "data_size": 0 00:13:21.947 }, 00:13:21.947 { 00:13:21.947 "name": null, 00:13:21.947 "uuid": "a3b7d39c-6e58-46d6-8682-ff0480ef0779", 00:13:21.947 "is_configured": false, 00:13:21.947 "data_offset": 0, 00:13:21.947 "data_size": 63488 00:13:21.947 }, 00:13:21.947 { 00:13:21.947 "name": "BaseBdev3", 00:13:21.947 "uuid": "050f0a6b-4c2d-4eba-9a11-a251262c3a76", 00:13:21.947 "is_configured": true, 00:13:21.947 "data_offset": 2048, 00:13:21.947 "data_size": 63488 00:13:21.947 } 00:13:21.947 ] 00:13:21.947 }' 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.947 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.207 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:22.207 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.207 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.207 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.207 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.467 [2024-10-15 01:14:34.943778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.467 BaseBdev1 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.467 [ 00:13:22.467 { 00:13:22.467 "name": "BaseBdev1", 00:13:22.467 "aliases": [ 00:13:22.467 "07fb41a2-0569-444c-a37c-43e4d8cab894" 00:13:22.467 ], 00:13:22.467 "product_name": "Malloc disk", 00:13:22.467 "block_size": 512, 00:13:22.467 "num_blocks": 65536, 00:13:22.467 "uuid": "07fb41a2-0569-444c-a37c-43e4d8cab894", 00:13:22.467 "assigned_rate_limits": { 00:13:22.467 "rw_ios_per_sec": 0, 00:13:22.467 "rw_mbytes_per_sec": 0, 00:13:22.467 "r_mbytes_per_sec": 0, 00:13:22.467 "w_mbytes_per_sec": 0 00:13:22.467 }, 00:13:22.467 "claimed": true, 00:13:22.467 "claim_type": "exclusive_write", 00:13:22.467 "zoned": false, 00:13:22.467 "supported_io_types": { 00:13:22.467 "read": true, 00:13:22.467 "write": true, 00:13:22.467 "unmap": true, 00:13:22.467 "flush": true, 00:13:22.467 "reset": true, 00:13:22.467 "nvme_admin": false, 00:13:22.467 "nvme_io": false, 00:13:22.467 "nvme_io_md": false, 00:13:22.467 "write_zeroes": true, 00:13:22.467 "zcopy": true, 00:13:22.467 "get_zone_info": false, 00:13:22.467 "zone_management": false, 00:13:22.467 "zone_append": false, 00:13:22.467 "compare": false, 00:13:22.467 "compare_and_write": false, 00:13:22.467 "abort": true, 00:13:22.467 "seek_hole": false, 00:13:22.467 "seek_data": false, 00:13:22.467 "copy": true, 00:13:22.467 "nvme_iov_md": false 00:13:22.467 }, 00:13:22.467 "memory_domains": [ 00:13:22.467 { 00:13:22.467 "dma_device_id": "system", 00:13:22.467 "dma_device_type": 1 00:13:22.467 }, 00:13:22.467 { 00:13:22.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.467 "dma_device_type": 2 00:13:22.467 } 00:13:22.467 ], 00:13:22.467 "driver_specific": {} 00:13:22.467 } 00:13:22.467 ] 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.467 01:14:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.467 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.467 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.467 "name": "Existed_Raid", 00:13:22.467 "uuid": "7492db19-dbae-4391-bbfe-420b44527b47", 00:13:22.467 "strip_size_kb": 64, 00:13:22.467 
"state": "configuring", 00:13:22.467 "raid_level": "raid5f", 00:13:22.467 "superblock": true, 00:13:22.467 "num_base_bdevs": 3, 00:13:22.467 "num_base_bdevs_discovered": 2, 00:13:22.467 "num_base_bdevs_operational": 3, 00:13:22.467 "base_bdevs_list": [ 00:13:22.467 { 00:13:22.467 "name": "BaseBdev1", 00:13:22.467 "uuid": "07fb41a2-0569-444c-a37c-43e4d8cab894", 00:13:22.467 "is_configured": true, 00:13:22.467 "data_offset": 2048, 00:13:22.467 "data_size": 63488 00:13:22.467 }, 00:13:22.467 { 00:13:22.467 "name": null, 00:13:22.467 "uuid": "a3b7d39c-6e58-46d6-8682-ff0480ef0779", 00:13:22.467 "is_configured": false, 00:13:22.467 "data_offset": 0, 00:13:22.467 "data_size": 63488 00:13:22.467 }, 00:13:22.467 { 00:13:22.467 "name": "BaseBdev3", 00:13:22.467 "uuid": "050f0a6b-4c2d-4eba-9a11-a251262c3a76", 00:13:22.467 "is_configured": true, 00:13:22.467 "data_offset": 2048, 00:13:22.467 "data_size": 63488 00:13:22.467 } 00:13:22.467 ] 00:13:22.467 }' 00:13:22.468 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.468 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.727 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev3 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.728 [2024-10-15 01:14:35.411028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.728 01:14:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.728 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.988 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.988 "name": "Existed_Raid", 00:13:22.988 "uuid": "7492db19-dbae-4391-bbfe-420b44527b47", 00:13:22.988 "strip_size_kb": 64, 00:13:22.988 "state": "configuring", 00:13:22.988 "raid_level": "raid5f", 00:13:22.988 "superblock": true, 00:13:22.988 "num_base_bdevs": 3, 00:13:22.988 "num_base_bdevs_discovered": 1, 00:13:22.988 "num_base_bdevs_operational": 3, 00:13:22.988 "base_bdevs_list": [ 00:13:22.988 { 00:13:22.988 "name": "BaseBdev1", 00:13:22.988 "uuid": "07fb41a2-0569-444c-a37c-43e4d8cab894", 00:13:22.988 "is_configured": true, 00:13:22.988 "data_offset": 2048, 00:13:22.988 "data_size": 63488 00:13:22.988 }, 00:13:22.988 { 00:13:22.988 "name": null, 00:13:22.988 "uuid": "a3b7d39c-6e58-46d6-8682-ff0480ef0779", 00:13:22.988 "is_configured": false, 00:13:22.988 "data_offset": 0, 00:13:22.988 "data_size": 63488 00:13:22.988 }, 00:13:22.988 { 00:13:22.988 "name": null, 00:13:22.988 "uuid": "050f0a6b-4c2d-4eba-9a11-a251262c3a76", 00:13:22.988 "is_configured": false, 00:13:22.988 "data_offset": 0, 00:13:22.988 "data_size": 63488 00:13:22.988 } 00:13:22.988 ] 00:13:22.988 }' 00:13:22.988 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.988 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.247 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:23.247 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:23.247 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.247 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.247 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.248 [2024-10-15 01:14:35.874276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.248 "name": "Existed_Raid", 00:13:23.248 "uuid": "7492db19-dbae-4391-bbfe-420b44527b47", 00:13:23.248 "strip_size_kb": 64, 00:13:23.248 "state": "configuring", 00:13:23.248 "raid_level": "raid5f", 00:13:23.248 "superblock": true, 00:13:23.248 "num_base_bdevs": 3, 00:13:23.248 "num_base_bdevs_discovered": 2, 00:13:23.248 "num_base_bdevs_operational": 3, 00:13:23.248 "base_bdevs_list": [ 00:13:23.248 { 00:13:23.248 "name": "BaseBdev1", 00:13:23.248 "uuid": "07fb41a2-0569-444c-a37c-43e4d8cab894", 00:13:23.248 "is_configured": true, 00:13:23.248 "data_offset": 2048, 00:13:23.248 "data_size": 63488 00:13:23.248 }, 00:13:23.248 { 00:13:23.248 "name": null, 00:13:23.248 "uuid": "a3b7d39c-6e58-46d6-8682-ff0480ef0779", 00:13:23.248 "is_configured": false, 00:13:23.248 "data_offset": 0, 00:13:23.248 "data_size": 63488 00:13:23.248 }, 00:13:23.248 { 00:13:23.248 "name": "BaseBdev3", 00:13:23.248 "uuid": "050f0a6b-4c2d-4eba-9a11-a251262c3a76", 00:13:23.248 "is_configured": true, 00:13:23.248 "data_offset": 
2048, 00:13:23.248 "data_size": 63488 00:13:23.248 } 00:13:23.248 ] 00:13:23.248 }' 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.248 01:14:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.818 [2024-10-15 01:14:36.381405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.818 01:14:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.818 "name": "Existed_Raid", 00:13:23.818 "uuid": "7492db19-dbae-4391-bbfe-420b44527b47", 00:13:23.818 "strip_size_kb": 64, 00:13:23.818 "state": "configuring", 00:13:23.818 "raid_level": "raid5f", 00:13:23.818 "superblock": true, 00:13:23.818 "num_base_bdevs": 3, 00:13:23.818 "num_base_bdevs_discovered": 1, 00:13:23.818 "num_base_bdevs_operational": 3, 00:13:23.818 "base_bdevs_list": [ 00:13:23.818 { 00:13:23.818 "name": null, 00:13:23.818 "uuid": "07fb41a2-0569-444c-a37c-43e4d8cab894", 
00:13:23.818 "is_configured": false, 00:13:23.818 "data_offset": 0, 00:13:23.818 "data_size": 63488 00:13:23.818 }, 00:13:23.818 { 00:13:23.818 "name": null, 00:13:23.818 "uuid": "a3b7d39c-6e58-46d6-8682-ff0480ef0779", 00:13:23.818 "is_configured": false, 00:13:23.818 "data_offset": 0, 00:13:23.818 "data_size": 63488 00:13:23.818 }, 00:13:23.818 { 00:13:23.818 "name": "BaseBdev3", 00:13:23.818 "uuid": "050f0a6b-4c2d-4eba-9a11-a251262c3a76", 00:13:23.818 "is_configured": true, 00:13:23.818 "data_offset": 2048, 00:13:23.818 "data_size": 63488 00:13:23.818 } 00:13:23.818 ] 00:13:23.818 }' 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.818 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.388 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.388 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.388 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.388 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:24.388 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.388 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:24.388 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.389 [2024-10-15 01:14:36.855270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.389 "name": "Existed_Raid", 00:13:24.389 "uuid": "7492db19-dbae-4391-bbfe-420b44527b47", 00:13:24.389 "strip_size_kb": 64, 00:13:24.389 "state": "configuring", 00:13:24.389 "raid_level": "raid5f", 00:13:24.389 "superblock": true, 00:13:24.389 "num_base_bdevs": 3, 00:13:24.389 "num_base_bdevs_discovered": 2, 00:13:24.389 "num_base_bdevs_operational": 3, 00:13:24.389 "base_bdevs_list": [ 00:13:24.389 { 00:13:24.389 "name": null, 00:13:24.389 "uuid": "07fb41a2-0569-444c-a37c-43e4d8cab894", 00:13:24.389 "is_configured": false, 00:13:24.389 "data_offset": 0, 00:13:24.389 "data_size": 63488 00:13:24.389 }, 00:13:24.389 { 00:13:24.389 "name": "BaseBdev2", 00:13:24.389 "uuid": "a3b7d39c-6e58-46d6-8682-ff0480ef0779", 00:13:24.389 "is_configured": true, 00:13:24.389 "data_offset": 2048, 00:13:24.389 "data_size": 63488 00:13:24.389 }, 00:13:24.389 { 00:13:24.389 "name": "BaseBdev3", 00:13:24.389 "uuid": "050f0a6b-4c2d-4eba-9a11-a251262c3a76", 00:13:24.389 "is_configured": true, 00:13:24.389 "data_offset": 2048, 00:13:24.389 "data_size": 63488 00:13:24.389 } 00:13:24.389 ] 00:13:24.389 }' 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.389 01:14:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.649 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:24.649 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.649 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.649 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.649 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.649 01:14:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:24.649 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.649 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.649 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.649 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:24.649 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.649 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 07fb41a2-0569-444c-a37c-43e4d8cab894 00:13:24.649 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.649 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.909 [2024-10-15 01:14:37.377627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:24.909 NewBaseBdev 00:13:24.909 [2024-10-15 01:14:37.377890] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:24.909 [2024-10-15 01:14:37.377929] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:24.909 [2024-10-15 01:14:37.378170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:13:24.909 [2024-10-15 01:14:37.378585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:24.909 [2024-10-15 01:14:37.378597] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:24.909 [2024-10-15 01:14:37.378701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:24.909 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.909 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:24.909 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:24.909 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:24.909 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:24.909 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:24.909 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:24.909 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:24.909 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.909 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.909 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.909 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:24.909 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.909 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.909 [ 00:13:24.909 { 00:13:24.909 "name": "NewBaseBdev", 00:13:24.909 "aliases": [ 00:13:24.909 "07fb41a2-0569-444c-a37c-43e4d8cab894" 00:13:24.909 ], 00:13:24.909 "product_name": "Malloc disk", 00:13:24.909 "block_size": 512, 00:13:24.909 "num_blocks": 65536, 00:13:24.909 "uuid": "07fb41a2-0569-444c-a37c-43e4d8cab894", 
00:13:24.909 "assigned_rate_limits": { 00:13:24.909 "rw_ios_per_sec": 0, 00:13:24.909 "rw_mbytes_per_sec": 0, 00:13:24.909 "r_mbytes_per_sec": 0, 00:13:24.909 "w_mbytes_per_sec": 0 00:13:24.909 }, 00:13:24.909 "claimed": true, 00:13:24.909 "claim_type": "exclusive_write", 00:13:24.909 "zoned": false, 00:13:24.909 "supported_io_types": { 00:13:24.909 "read": true, 00:13:24.909 "write": true, 00:13:24.909 "unmap": true, 00:13:24.909 "flush": true, 00:13:24.909 "reset": true, 00:13:24.909 "nvme_admin": false, 00:13:24.909 "nvme_io": false, 00:13:24.909 "nvme_io_md": false, 00:13:24.909 "write_zeroes": true, 00:13:24.909 "zcopy": true, 00:13:24.909 "get_zone_info": false, 00:13:24.909 "zone_management": false, 00:13:24.909 "zone_append": false, 00:13:24.909 "compare": false, 00:13:24.909 "compare_and_write": false, 00:13:24.909 "abort": true, 00:13:24.909 "seek_hole": false, 00:13:24.909 "seek_data": false, 00:13:24.909 "copy": true, 00:13:24.909 "nvme_iov_md": false 00:13:24.909 }, 00:13:24.909 "memory_domains": [ 00:13:24.909 { 00:13:24.909 "dma_device_id": "system", 00:13:24.909 "dma_device_type": 1 00:13:24.909 }, 00:13:24.909 { 00:13:24.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.909 "dma_device_type": 2 00:13:24.909 } 00:13:24.909 ], 00:13:24.909 "driver_specific": {} 00:13:24.909 } 00:13:24.909 ] 00:13:24.909 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.910 01:14:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.910 "name": "Existed_Raid", 00:13:24.910 "uuid": "7492db19-dbae-4391-bbfe-420b44527b47", 00:13:24.910 "strip_size_kb": 64, 00:13:24.910 "state": "online", 00:13:24.910 "raid_level": "raid5f", 00:13:24.910 "superblock": true, 00:13:24.910 "num_base_bdevs": 3, 00:13:24.910 "num_base_bdevs_discovered": 3, 00:13:24.910 "num_base_bdevs_operational": 3, 00:13:24.910 "base_bdevs_list": [ 00:13:24.910 { 00:13:24.910 "name": "NewBaseBdev", 00:13:24.910 "uuid": "07fb41a2-0569-444c-a37c-43e4d8cab894", 
00:13:24.910 "is_configured": true, 00:13:24.910 "data_offset": 2048, 00:13:24.910 "data_size": 63488 00:13:24.910 }, 00:13:24.910 { 00:13:24.910 "name": "BaseBdev2", 00:13:24.910 "uuid": "a3b7d39c-6e58-46d6-8682-ff0480ef0779", 00:13:24.910 "is_configured": true, 00:13:24.910 "data_offset": 2048, 00:13:24.910 "data_size": 63488 00:13:24.910 }, 00:13:24.910 { 00:13:24.910 "name": "BaseBdev3", 00:13:24.910 "uuid": "050f0a6b-4c2d-4eba-9a11-a251262c3a76", 00:13:24.910 "is_configured": true, 00:13:24.910 "data_offset": 2048, 00:13:24.910 "data_size": 63488 00:13:24.910 } 00:13:24.910 ] 00:13:24.910 }' 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.910 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.170 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:25.170 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:25.170 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:25.170 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:25.170 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:25.170 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:25.170 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:25.170 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:25.170 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.170 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.170 
[2024-10-15 01:14:37.849069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.170 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.170 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:25.170 "name": "Existed_Raid", 00:13:25.170 "aliases": [ 00:13:25.170 "7492db19-dbae-4391-bbfe-420b44527b47" 00:13:25.170 ], 00:13:25.170 "product_name": "Raid Volume", 00:13:25.170 "block_size": 512, 00:13:25.170 "num_blocks": 126976, 00:13:25.170 "uuid": "7492db19-dbae-4391-bbfe-420b44527b47", 00:13:25.170 "assigned_rate_limits": { 00:13:25.170 "rw_ios_per_sec": 0, 00:13:25.170 "rw_mbytes_per_sec": 0, 00:13:25.170 "r_mbytes_per_sec": 0, 00:13:25.170 "w_mbytes_per_sec": 0 00:13:25.170 }, 00:13:25.170 "claimed": false, 00:13:25.170 "zoned": false, 00:13:25.170 "supported_io_types": { 00:13:25.170 "read": true, 00:13:25.170 "write": true, 00:13:25.170 "unmap": false, 00:13:25.170 "flush": false, 00:13:25.170 "reset": true, 00:13:25.170 "nvme_admin": false, 00:13:25.170 "nvme_io": false, 00:13:25.170 "nvme_io_md": false, 00:13:25.170 "write_zeroes": true, 00:13:25.170 "zcopy": false, 00:13:25.170 "get_zone_info": false, 00:13:25.170 "zone_management": false, 00:13:25.170 "zone_append": false, 00:13:25.170 "compare": false, 00:13:25.170 "compare_and_write": false, 00:13:25.170 "abort": false, 00:13:25.170 "seek_hole": false, 00:13:25.170 "seek_data": false, 00:13:25.170 "copy": false, 00:13:25.170 "nvme_iov_md": false 00:13:25.170 }, 00:13:25.170 "driver_specific": { 00:13:25.170 "raid": { 00:13:25.170 "uuid": "7492db19-dbae-4391-bbfe-420b44527b47", 00:13:25.170 "strip_size_kb": 64, 00:13:25.170 "state": "online", 00:13:25.170 "raid_level": "raid5f", 00:13:25.170 "superblock": true, 00:13:25.170 "num_base_bdevs": 3, 00:13:25.170 "num_base_bdevs_discovered": 3, 00:13:25.170 "num_base_bdevs_operational": 3, 00:13:25.170 "base_bdevs_list": 
[ 00:13:25.170 { 00:13:25.170 "name": "NewBaseBdev", 00:13:25.170 "uuid": "07fb41a2-0569-444c-a37c-43e4d8cab894", 00:13:25.170 "is_configured": true, 00:13:25.170 "data_offset": 2048, 00:13:25.170 "data_size": 63488 00:13:25.170 }, 00:13:25.170 { 00:13:25.170 "name": "BaseBdev2", 00:13:25.170 "uuid": "a3b7d39c-6e58-46d6-8682-ff0480ef0779", 00:13:25.170 "is_configured": true, 00:13:25.170 "data_offset": 2048, 00:13:25.170 "data_size": 63488 00:13:25.170 }, 00:13:25.170 { 00:13:25.170 "name": "BaseBdev3", 00:13:25.170 "uuid": "050f0a6b-4c2d-4eba-9a11-a251262c3a76", 00:13:25.170 "is_configured": true, 00:13:25.170 "data_offset": 2048, 00:13:25.170 "data_size": 63488 00:13:25.170 } 00:13:25.170 ] 00:13:25.170 } 00:13:25.170 } 00:13:25.170 }' 00:13:25.170 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:25.437 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:25.437 BaseBdev2 00:13:25.437 BaseBdev3' 00:13:25.437 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.438 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:25.438 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.438 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:25.438 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.438 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.438 01:14:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:13:25.438 01:14:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.438 01:14:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.438 [2024-10-15 01:14:38.140334] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:25.438 [2024-10-15 01:14:38.140401] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.438 [2024-10-15 01:14:38.140496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.438 [2024-10-15 01:14:38.140765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.438 [2024-10-15 01:14:38.140820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 90799 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 90799 ']' 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 90799 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:25.438 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90799 00:13:25.698 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:25.698 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:25.698 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90799' 00:13:25.698 killing process with pid 90799 00:13:25.698 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 90799 00:13:25.698 [2024-10-15 01:14:38.189105] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:25.698 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 90799 00:13:25.698 [2024-10-15 01:14:38.219479] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:25.959 01:14:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:25.959 00:13:25.959 real 0m8.733s 00:13:25.959 user 0m14.968s 00:13:25.959 sys 0m1.770s 00:13:25.959 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:25.959 ************************************ 00:13:25.959 END TEST raid5f_state_function_test_sb 00:13:25.959 ************************************ 00:13:25.959 01:14:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.959 01:14:38 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:25.959 01:14:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:25.959 01:14:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.959 01:14:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:25.959 ************************************ 00:13:25.959 START TEST raid5f_superblock_test 00:13:25.959 ************************************ 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 
00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91403 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91403 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 91403 ']' 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:25.959 01:14:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.959 [2024-10-15 01:14:38.588495] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:13:25.959 [2024-10-15 01:14:38.588704] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91403 ] 00:13:26.219 [2024-10-15 01:14:38.732529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.219 [2024-10-15 01:14:38.758919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.219 [2024-10-15 01:14:38.802051] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.219 [2024-10-15 01:14:38.802171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.789 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:26.789 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:26.789 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:26.789 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:26.789 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:26.789 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:26.789 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:26.789 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:26.789 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:26.789 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:26.789 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.790 malloc1 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.790 [2024-10-15 01:14:39.437064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:26.790 [2024-10-15 01:14:39.437189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.790 [2024-10-15 01:14:39.437235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:26.790 [2024-10-15 01:14:39.437277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.790 [2024-10-15 01:14:39.439346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.790 [2024-10-15 01:14:39.439432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:26.790 pt1 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.790 malloc2 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.790 [2024-10-15 01:14:39.469768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:26.790 [2024-10-15 01:14:39.469857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.790 [2024-10-15 01:14:39.469905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:26.790 [2024-10-15 01:14:39.469933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.790 [2024-10-15 01:14:39.471962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.790 [2024-10-15 01:14:39.472029] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:26.790 pt2 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.790 malloc3 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.790 [2024-10-15 01:14:39.502419] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:26.790 [2024-10-15 01:14:39.502507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.790 [2024-10-15 01:14:39.502556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:26.790 [2024-10-15 01:14:39.502585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.790 [2024-10-15 01:14:39.504660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.790 [2024-10-15 01:14:39.504735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:26.790 pt3 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.790 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.050 [2024-10-15 01:14:39.514465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:27.050 [2024-10-15 01:14:39.516449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:27.050 [2024-10-15 01:14:39.516543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:27.050 [2024-10-15 01:14:39.516743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:27.050 [2024-10-15 01:14:39.516793] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:13:27.050 [2024-10-15 01:14:39.517080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:27.050 [2024-10-15 01:14:39.517553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:27.050 [2024-10-15 01:14:39.517573] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:27.050 [2024-10-15 01:14:39.517693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.050 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.050 "name": "raid_bdev1", 00:13:27.050 "uuid": "73e22691-1883-43f2-b7a9-eef2764b3225", 00:13:27.050 "strip_size_kb": 64, 00:13:27.050 "state": "online", 00:13:27.050 "raid_level": "raid5f", 00:13:27.050 "superblock": true, 00:13:27.050 "num_base_bdevs": 3, 00:13:27.050 "num_base_bdevs_discovered": 3, 00:13:27.050 "num_base_bdevs_operational": 3, 00:13:27.050 "base_bdevs_list": [ 00:13:27.050 { 00:13:27.050 "name": "pt1", 00:13:27.050 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:27.050 "is_configured": true, 00:13:27.051 "data_offset": 2048, 00:13:27.051 "data_size": 63488 00:13:27.051 }, 00:13:27.051 { 00:13:27.051 "name": "pt2", 00:13:27.051 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:27.051 "is_configured": true, 00:13:27.051 "data_offset": 2048, 00:13:27.051 "data_size": 63488 00:13:27.051 }, 00:13:27.051 { 00:13:27.051 "name": "pt3", 00:13:27.051 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:27.051 "is_configured": true, 00:13:27.051 "data_offset": 2048, 00:13:27.051 "data_size": 63488 00:13:27.051 } 00:13:27.051 ] 00:13:27.051 }' 00:13:27.051 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.051 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.310 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:27.310 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:27.310 01:14:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:27.310 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:27.310 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:27.310 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:27.310 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:27.310 01:14:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:27.311 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.311 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.311 [2024-10-15 01:14:39.970520] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.311 01:14:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.311 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:27.311 "name": "raid_bdev1", 00:13:27.311 "aliases": [ 00:13:27.311 "73e22691-1883-43f2-b7a9-eef2764b3225" 00:13:27.311 ], 00:13:27.311 "product_name": "Raid Volume", 00:13:27.311 "block_size": 512, 00:13:27.311 "num_blocks": 126976, 00:13:27.311 "uuid": "73e22691-1883-43f2-b7a9-eef2764b3225", 00:13:27.311 "assigned_rate_limits": { 00:13:27.311 "rw_ios_per_sec": 0, 00:13:27.311 "rw_mbytes_per_sec": 0, 00:13:27.311 "r_mbytes_per_sec": 0, 00:13:27.311 "w_mbytes_per_sec": 0 00:13:27.311 }, 00:13:27.311 "claimed": false, 00:13:27.311 "zoned": false, 00:13:27.311 "supported_io_types": { 00:13:27.311 "read": true, 00:13:27.311 "write": true, 00:13:27.311 "unmap": false, 00:13:27.311 "flush": false, 00:13:27.311 "reset": true, 00:13:27.311 "nvme_admin": false, 00:13:27.311 "nvme_io": false, 00:13:27.311 "nvme_io_md": false, 
00:13:27.311 "write_zeroes": true, 00:13:27.311 "zcopy": false, 00:13:27.311 "get_zone_info": false, 00:13:27.311 "zone_management": false, 00:13:27.311 "zone_append": false, 00:13:27.311 "compare": false, 00:13:27.311 "compare_and_write": false, 00:13:27.311 "abort": false, 00:13:27.311 "seek_hole": false, 00:13:27.311 "seek_data": false, 00:13:27.311 "copy": false, 00:13:27.311 "nvme_iov_md": false 00:13:27.311 }, 00:13:27.311 "driver_specific": { 00:13:27.311 "raid": { 00:13:27.311 "uuid": "73e22691-1883-43f2-b7a9-eef2764b3225", 00:13:27.311 "strip_size_kb": 64, 00:13:27.311 "state": "online", 00:13:27.311 "raid_level": "raid5f", 00:13:27.311 "superblock": true, 00:13:27.311 "num_base_bdevs": 3, 00:13:27.311 "num_base_bdevs_discovered": 3, 00:13:27.311 "num_base_bdevs_operational": 3, 00:13:27.311 "base_bdevs_list": [ 00:13:27.311 { 00:13:27.311 "name": "pt1", 00:13:27.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:27.311 "is_configured": true, 00:13:27.311 "data_offset": 2048, 00:13:27.311 "data_size": 63488 00:13:27.311 }, 00:13:27.311 { 00:13:27.311 "name": "pt2", 00:13:27.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:27.311 "is_configured": true, 00:13:27.311 "data_offset": 2048, 00:13:27.311 "data_size": 63488 00:13:27.311 }, 00:13:27.311 { 00:13:27.311 "name": "pt3", 00:13:27.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:27.311 "is_configured": true, 00:13:27.311 "data_offset": 2048, 00:13:27.311 "data_size": 63488 00:13:27.311 } 00:13:27.311 ] 00:13:27.311 } 00:13:27.311 } 00:13:27.311 }' 00:13:27.311 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:27.652 pt2 00:13:27.652 pt3' 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.652 
01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.652 [2024-10-15 01:14:40.257962] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=73e22691-1883-43f2-b7a9-eef2764b3225 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 73e22691-1883-43f2-b7a9-eef2764b3225 ']' 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:27.652 01:14:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.652 [2024-10-15 01:14:40.305709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:27.652 [2024-10-15 01:14:40.305735] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.652 [2024-10-15 01:14:40.305827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.652 [2024-10-15 01:14:40.305901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.652 [2024-10-15 01:14:40.305913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.652 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.925 [2024-10-15 01:14:40.453478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:27.925 [2024-10-15 01:14:40.455404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:27.925 [2024-10-15 01:14:40.455483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:27.925 [2024-10-15 01:14:40.455552] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:27.925 [2024-10-15 01:14:40.455638] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:27.925 [2024-10-15 01:14:40.455692] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:27.925 [2024-10-15 01:14:40.455755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:27.925 [2024-10-15 01:14:40.455794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:13:27.925 request: 00:13:27.925 { 00:13:27.925 "name": "raid_bdev1", 00:13:27.925 "raid_level": "raid5f", 00:13:27.925 "base_bdevs": [ 00:13:27.925 "malloc1", 00:13:27.925 "malloc2", 00:13:27.925 "malloc3" 00:13:27.925 ], 00:13:27.925 "strip_size_kb": 64, 00:13:27.925 "superblock": false, 00:13:27.925 "method": "bdev_raid_create", 00:13:27.925 "req_id": 1 00:13:27.925 } 00:13:27.925 Got JSON-RPC error response 00:13:27.925 response: 00:13:27.925 { 00:13:27.925 "code": -17, 00:13:27.925 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:27.925 } 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:27.925 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:27.926 
01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.926 [2024-10-15 01:14:40.509333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:27.926 [2024-10-15 01:14:40.509416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.926 [2024-10-15 01:14:40.509436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:27.926 [2024-10-15 01:14:40.509446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.926 [2024-10-15 01:14:40.511565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.926 [2024-10-15 01:14:40.511603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:27.926 [2024-10-15 01:14:40.511663] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:27.926 [2024-10-15 01:14:40.511710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:27.926 pt1 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.926 "name": "raid_bdev1", 00:13:27.926 "uuid": "73e22691-1883-43f2-b7a9-eef2764b3225", 00:13:27.926 "strip_size_kb": 64, 00:13:27.926 "state": "configuring", 00:13:27.926 "raid_level": "raid5f", 00:13:27.926 "superblock": true, 00:13:27.926 "num_base_bdevs": 3, 00:13:27.926 "num_base_bdevs_discovered": 1, 00:13:27.926 
"num_base_bdevs_operational": 3, 00:13:27.926 "base_bdevs_list": [ 00:13:27.926 { 00:13:27.926 "name": "pt1", 00:13:27.926 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:27.926 "is_configured": true, 00:13:27.926 "data_offset": 2048, 00:13:27.926 "data_size": 63488 00:13:27.926 }, 00:13:27.926 { 00:13:27.926 "name": null, 00:13:27.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:27.926 "is_configured": false, 00:13:27.926 "data_offset": 2048, 00:13:27.926 "data_size": 63488 00:13:27.926 }, 00:13:27.926 { 00:13:27.926 "name": null, 00:13:27.926 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:27.926 "is_configured": false, 00:13:27.926 "data_offset": 2048, 00:13:27.926 "data_size": 63488 00:13:27.926 } 00:13:27.926 ] 00:13:27.926 }' 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.926 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.496 [2024-10-15 01:14:40.928679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:28.496 [2024-10-15 01:14:40.928809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.496 [2024-10-15 01:14:40.928849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:28.496 [2024-10-15 01:14:40.928882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.496 [2024-10-15 01:14:40.929351] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.496 [2024-10-15 01:14:40.929408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:28.496 [2024-10-15 01:14:40.929508] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:28.496 [2024-10-15 01:14:40.929559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:28.496 pt2 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.496 [2024-10-15 01:14:40.936648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.496 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.497 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.497 01:14:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.497 "name": "raid_bdev1", 00:13:28.497 "uuid": "73e22691-1883-43f2-b7a9-eef2764b3225", 00:13:28.497 "strip_size_kb": 64, 00:13:28.497 "state": "configuring", 00:13:28.497 "raid_level": "raid5f", 00:13:28.497 "superblock": true, 00:13:28.497 "num_base_bdevs": 3, 00:13:28.497 "num_base_bdevs_discovered": 1, 00:13:28.497 "num_base_bdevs_operational": 3, 00:13:28.497 "base_bdevs_list": [ 00:13:28.497 { 00:13:28.497 "name": "pt1", 00:13:28.497 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:28.497 "is_configured": true, 00:13:28.497 "data_offset": 2048, 00:13:28.497 "data_size": 63488 00:13:28.497 }, 00:13:28.497 { 00:13:28.497 "name": null, 00:13:28.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:28.497 "is_configured": false, 00:13:28.497 "data_offset": 0, 00:13:28.497 "data_size": 63488 00:13:28.497 }, 00:13:28.497 { 00:13:28.497 "name": null, 00:13:28.497 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:28.497 "is_configured": false, 00:13:28.497 "data_offset": 2048, 00:13:28.497 "data_size": 63488 00:13:28.497 } 00:13:28.497 ] 00:13:28.497 }' 00:13:28.497 01:14:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.497 01:14:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.757 [2024-10-15 01:14:41.331983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:28.757 [2024-10-15 01:14:41.332092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.757 [2024-10-15 01:14:41.332120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:28.757 [2024-10-15 01:14:41.332129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.757 [2024-10-15 01:14:41.332580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.757 [2024-10-15 01:14:41.332608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:28.757 [2024-10-15 01:14:41.332692] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:28.757 [2024-10-15 01:14:41.332714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:28.757 pt2 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:28.757 01:14:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.757 [2024-10-15 01:14:41.339944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:28.757 [2024-10-15 01:14:41.339988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.757 [2024-10-15 01:14:41.340008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:28.757 [2024-10-15 01:14:41.340016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.757 [2024-10-15 01:14:41.340362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.757 [2024-10-15 01:14:41.340389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:28.757 [2024-10-15 01:14:41.340449] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:28.757 [2024-10-15 01:14:41.340467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:28.757 [2024-10-15 01:14:41.340564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:28.757 [2024-10-15 01:14:41.340589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:28.757 [2024-10-15 01:14:41.340815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:28.757 [2024-10-15 01:14:41.341212] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:28.757 [2024-10-15 01:14:41.341227] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:13:28.757 [2024-10-15 01:14:41.341330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.757 pt3 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.757 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.757 "name": "raid_bdev1", 00:13:28.757 "uuid": "73e22691-1883-43f2-b7a9-eef2764b3225", 00:13:28.758 "strip_size_kb": 64, 00:13:28.758 "state": "online", 00:13:28.758 "raid_level": "raid5f", 00:13:28.758 "superblock": true, 00:13:28.758 "num_base_bdevs": 3, 00:13:28.758 "num_base_bdevs_discovered": 3, 00:13:28.758 "num_base_bdevs_operational": 3, 00:13:28.758 "base_bdevs_list": [ 00:13:28.758 { 00:13:28.758 "name": "pt1", 00:13:28.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:28.758 "is_configured": true, 00:13:28.758 "data_offset": 2048, 00:13:28.758 "data_size": 63488 00:13:28.758 }, 00:13:28.758 { 00:13:28.758 "name": "pt2", 00:13:28.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:28.758 "is_configured": true, 00:13:28.758 "data_offset": 2048, 00:13:28.758 "data_size": 63488 00:13:28.758 }, 00:13:28.758 { 00:13:28.758 "name": "pt3", 00:13:28.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:28.758 "is_configured": true, 00:13:28.758 "data_offset": 2048, 00:13:28.758 "data_size": 63488 00:13:28.758 } 00:13:28.758 ] 00:13:28.758 }' 00:13:28.758 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.758 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.018 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:29.018 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:29.018 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
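The trace above repeatedly runs `rpc_cmd bdev_raid_get_bdevs all` and pipes it through `jq -r '.[] | select(.name == "raid_bdev1")'` before comparing fields like `state` and `raid_level` against expected values. A minimal standalone sketch of that field-check step (the function name and grep-based extraction here are illustrative assumptions, not SPDK's actual `verify_raid_bdev_state`, which uses jq):

```shell
# Hypothetical helper: pull one string field out of a raid bdev JSON
# dump and compare it to an expected value, as the trace does for
# "state": "online" and "raid_level": "raid5f".
verify_state() {
  local info=$1 field=$2 expected=$3
  local actual
  # grep the first "field": "value" pair; the 4th "-delimited token is the value
  actual=$(printf '%s\n' "$info" | grep -o "\"$field\": \"[^\"]*\"" | head -n1 | cut -d'"' -f4)
  [[ "$actual" == "$expected" ]]
}

info='{ "name": "raid_bdev1", "state": "online", "raid_level": "raid5f" }'
verify_state "$info" state online && echo "state ok"
verify_state "$info" raid_level raid5f && echo "level ok"
```

The real script keeps the whole JSON in `raid_bdev_info` and re-queries it per field, which is the same idea with jq doing the extraction.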
00:13:29.018 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:29.018 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:29.018 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:29.018 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:29.018 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.018 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.018 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:29.018 [2024-10-15 01:14:41.739564] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.278 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.278 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:29.278 "name": "raid_bdev1", 00:13:29.278 "aliases": [ 00:13:29.278 "73e22691-1883-43f2-b7a9-eef2764b3225" 00:13:29.278 ], 00:13:29.278 "product_name": "Raid Volume", 00:13:29.278 "block_size": 512, 00:13:29.278 "num_blocks": 126976, 00:13:29.278 "uuid": "73e22691-1883-43f2-b7a9-eef2764b3225", 00:13:29.278 "assigned_rate_limits": { 00:13:29.278 "rw_ios_per_sec": 0, 00:13:29.278 "rw_mbytes_per_sec": 0, 00:13:29.278 "r_mbytes_per_sec": 0, 00:13:29.278 "w_mbytes_per_sec": 0 00:13:29.278 }, 00:13:29.278 "claimed": false, 00:13:29.278 "zoned": false, 00:13:29.278 "supported_io_types": { 00:13:29.278 "read": true, 00:13:29.278 "write": true, 00:13:29.278 "unmap": false, 00:13:29.278 "flush": false, 00:13:29.278 "reset": true, 00:13:29.278 "nvme_admin": false, 00:13:29.278 "nvme_io": false, 00:13:29.278 "nvme_io_md": false, 00:13:29.278 "write_zeroes": true, 00:13:29.278 "zcopy": false, 00:13:29.278 
"get_zone_info": false, 00:13:29.278 "zone_management": false, 00:13:29.278 "zone_append": false, 00:13:29.278 "compare": false, 00:13:29.278 "compare_and_write": false, 00:13:29.278 "abort": false, 00:13:29.278 "seek_hole": false, 00:13:29.278 "seek_data": false, 00:13:29.278 "copy": false, 00:13:29.278 "nvme_iov_md": false 00:13:29.278 }, 00:13:29.278 "driver_specific": { 00:13:29.278 "raid": { 00:13:29.278 "uuid": "73e22691-1883-43f2-b7a9-eef2764b3225", 00:13:29.278 "strip_size_kb": 64, 00:13:29.278 "state": "online", 00:13:29.278 "raid_level": "raid5f", 00:13:29.278 "superblock": true, 00:13:29.278 "num_base_bdevs": 3, 00:13:29.278 "num_base_bdevs_discovered": 3, 00:13:29.278 "num_base_bdevs_operational": 3, 00:13:29.278 "base_bdevs_list": [ 00:13:29.278 { 00:13:29.278 "name": "pt1", 00:13:29.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:29.278 "is_configured": true, 00:13:29.278 "data_offset": 2048, 00:13:29.278 "data_size": 63488 00:13:29.278 }, 00:13:29.278 { 00:13:29.278 "name": "pt2", 00:13:29.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:29.278 "is_configured": true, 00:13:29.278 "data_offset": 2048, 00:13:29.278 "data_size": 63488 00:13:29.278 }, 00:13:29.278 { 00:13:29.278 "name": "pt3", 00:13:29.278 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:29.278 "is_configured": true, 00:13:29.278 "data_offset": 2048, 00:13:29.278 "data_size": 63488 00:13:29.278 } 00:13:29.278 ] 00:13:29.278 } 00:13:29.278 } 00:13:29.278 }' 00:13:29.278 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:29.278 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:29.278 pt2 00:13:29.278 pt3' 00:13:29.278 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.278 01:14:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:29.278 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.278 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:29.278 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.278 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.278 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.279 [2024-10-15 01:14:41.975063] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.279 01:14:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 73e22691-1883-43f2-b7a9-eef2764b3225 '!=' 73e22691-1883-43f2-b7a9-eef2764b3225 ']' 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
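The `[[ 512 == \5\1\2\ \ \ ]]` checks above compare a four-field metadata string: `jq` joins `.block_size`, `.md_size`, `.md_interleave`, and `.dif_type` with spaces, nulls become empty fields, and the result is `512` followed by three trailing spaces. A sketch of that joined-string comparison (the `join_fields` helper is a hypothetical stand-in for the jq expression, not part of SPDK):

```shell
# Hypothetical helper mirroring
#   jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# where null fields render as empty strings.
join_fields() {
  printf '%s %s %s %s' "$1" "$2" "$3" "$4"
}

# For these bdevs block_size is 512 and the other three fields are null,
# hence the trailing spaces the trace escapes as \5\1\2\ \ \ .
cmp_raid_bdev=$(join_fields 512 "" "" "")
cmp_base_bdev=$(join_fields 512 "" "" "")
[[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]] && echo "metadata formats match"
```

Comparing the joined string byte-for-byte (trailing spaces included) is what lets one test cover all four metadata parameters at once for each base bdev.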
00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.539 [2024-10-15 01:14:42.022853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.539 "name": "raid_bdev1", 00:13:29.539 "uuid": "73e22691-1883-43f2-b7a9-eef2764b3225", 00:13:29.539 "strip_size_kb": 64, 00:13:29.539 "state": "online", 00:13:29.539 "raid_level": "raid5f", 00:13:29.539 "superblock": true, 00:13:29.539 "num_base_bdevs": 3, 00:13:29.539 "num_base_bdevs_discovered": 2, 00:13:29.539 "num_base_bdevs_operational": 2, 00:13:29.539 "base_bdevs_list": [ 00:13:29.539 { 00:13:29.539 "name": null, 00:13:29.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.539 "is_configured": false, 00:13:29.539 "data_offset": 0, 00:13:29.539 "data_size": 63488 00:13:29.539 }, 00:13:29.539 { 00:13:29.539 "name": "pt2", 00:13:29.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:29.539 "is_configured": true, 00:13:29.539 "data_offset": 2048, 00:13:29.539 "data_size": 63488 00:13:29.539 }, 00:13:29.539 { 00:13:29.539 "name": "pt3", 00:13:29.539 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:29.539 "is_configured": true, 00:13:29.539 "data_offset": 2048, 00:13:29.539 "data_size": 63488 00:13:29.539 } 00:13:29.539 ] 00:13:29.539 }' 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.539 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.799 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:29.799 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.799 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.799 [2024-10-15 01:14:42.494026] 
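After `bdev_passthru_delete pt1`, the dump above shows `num_base_bdevs_discovered` dropping from 3 to 2 while `state` stays `online`: raid5f has single-parity redundancy, so a 3-bdev array keeps serving I/O with one base bdev missing. A sketch of that expectation (this helper is an assumed illustration of the state rule, not a function from `bdev_raid.sh`):

```shell
# Hypothetical rule: with single-parity raid5f, losing one base bdev
# leaves the array online (degraded); losing more takes it offline.
expected_state() {
  local num_base=$1 discovered=$2
  if (( discovered == num_base )); then
    echo online                       # fully redundant
  elif (( discovered >= num_base - 1 )); then
    echo online                       # degraded but still serving I/O
  else
    echo offline
  fi
}

expected_state 3 2   # → online, matching the verify_raid_bdev_state call above
```

This is why the trace verifies `raid_bdev1 online raid5f 64 2` here, with `num_base_bdevs_operational` reduced to 2 rather than the array going offline.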
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:29.799 [2024-10-15 01:14:42.494109] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:29.799 [2024-10-15 01:14:42.494216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.799 [2024-10-15 01:14:42.494313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:29.799 [2024-10-15 01:14:42.494364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:13:29.799 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.799 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.799 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:29.799 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.799 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.799 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.060 [2024-10-15 01:14:42.577850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:30.060 [2024-10-15 01:14:42.577933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.060 [2024-10-15 01:14:42.577971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:30.060 [2024-10-15 01:14:42.577980] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:13:30.060 [2024-10-15 01:14:42.580068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.060 [2024-10-15 01:14:42.580105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:30.060 [2024-10-15 01:14:42.580188] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:30.060 [2024-10-15 01:14:42.580234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:30.060 pt2 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.060 "name": "raid_bdev1", 00:13:30.060 "uuid": "73e22691-1883-43f2-b7a9-eef2764b3225", 00:13:30.060 "strip_size_kb": 64, 00:13:30.060 "state": "configuring", 00:13:30.060 "raid_level": "raid5f", 00:13:30.060 "superblock": true, 00:13:30.060 "num_base_bdevs": 3, 00:13:30.060 "num_base_bdevs_discovered": 1, 00:13:30.060 "num_base_bdevs_operational": 2, 00:13:30.060 "base_bdevs_list": [ 00:13:30.060 { 00:13:30.060 "name": null, 00:13:30.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.060 "is_configured": false, 00:13:30.060 "data_offset": 2048, 00:13:30.060 "data_size": 63488 00:13:30.060 }, 00:13:30.060 { 00:13:30.060 "name": "pt2", 00:13:30.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:30.060 "is_configured": true, 00:13:30.060 "data_offset": 2048, 00:13:30.060 "data_size": 63488 00:13:30.060 }, 00:13:30.060 { 00:13:30.060 "name": null, 00:13:30.060 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:30.060 "is_configured": false, 00:13:30.060 "data_offset": 2048, 00:13:30.060 "data_size": 63488 00:13:30.060 } 00:13:30.060 ] 00:13:30.060 }' 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.060 01:14:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.629 [2024-10-15 01:14:43.057077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:30.629 [2024-10-15 01:14:43.057202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.629 [2024-10-15 01:14:43.057244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:30.629 [2024-10-15 01:14:43.057279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.629 [2024-10-15 01:14:43.057705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.629 [2024-10-15 01:14:43.057761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:30.629 [2024-10-15 01:14:43.057866] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:30.629 [2024-10-15 01:14:43.057915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:30.629 [2024-10-15 01:14:43.058031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:30.629 [2024-10-15 01:14:43.058066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:30.629 [2024-10-15 01:14:43.058329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:30.629 [2024-10-15 01:14:43.058835] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:30.629 [2024-10-15 01:14:43.058887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000001c80 00:13:30.629 [2024-10-15 01:14:43.059158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.629 pt3 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.629 01:14:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.629 "name": "raid_bdev1", 00:13:30.629 "uuid": "73e22691-1883-43f2-b7a9-eef2764b3225", 00:13:30.629 "strip_size_kb": 64, 00:13:30.629 "state": "online", 00:13:30.629 "raid_level": "raid5f", 00:13:30.629 "superblock": true, 00:13:30.629 "num_base_bdevs": 3, 00:13:30.629 "num_base_bdevs_discovered": 2, 00:13:30.629 "num_base_bdevs_operational": 2, 00:13:30.629 "base_bdevs_list": [ 00:13:30.629 { 00:13:30.629 "name": null, 00:13:30.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.629 "is_configured": false, 00:13:30.629 "data_offset": 2048, 00:13:30.629 "data_size": 63488 00:13:30.629 }, 00:13:30.629 { 00:13:30.629 "name": "pt2", 00:13:30.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:30.629 "is_configured": true, 00:13:30.629 "data_offset": 2048, 00:13:30.629 "data_size": 63488 00:13:30.629 }, 00:13:30.629 { 00:13:30.629 "name": "pt3", 00:13:30.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:30.629 "is_configured": true, 00:13:30.629 "data_offset": 2048, 00:13:30.629 "data_size": 63488 00:13:30.629 } 00:13:30.629 ] 00:13:30.629 }' 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.629 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.889 [2024-10-15 01:14:43.496338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:30.889 [2024-10-15 01:14:43.496368] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.889 [2024-10-15 01:14:43.496452] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.889 [2024-10-15 01:14:43.496518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.889 [2024-10-15 01:14:43.496531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.889 [2024-10-15 01:14:43.568211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:30.889 [2024-10-15 01:14:43.568332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.889 [2024-10-15 01:14:43.568375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:30.889 [2024-10-15 01:14:43.568416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.889 [2024-10-15 01:14:43.570787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.889 [2024-10-15 01:14:43.570862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:30.889 [2024-10-15 01:14:43.570957] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:30.889 [2024-10-15 01:14:43.571039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:30.889 [2024-10-15 01:14:43.571209] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:30.889 [2024-10-15 01:14:43.571281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:30.889 [2024-10-15 01:14:43.571383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:13:30.889 [2024-10-15 01:14:43.571472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:30.889 pt1 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:30.889 01:14:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.889 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.148 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.148 "name": "raid_bdev1", 00:13:31.148 "uuid": "73e22691-1883-43f2-b7a9-eef2764b3225", 00:13:31.148 "strip_size_kb": 64, 00:13:31.148 "state": "configuring", 00:13:31.149 "raid_level": "raid5f", 00:13:31.149 
"superblock": true, 00:13:31.149 "num_base_bdevs": 3, 00:13:31.149 "num_base_bdevs_discovered": 1, 00:13:31.149 "num_base_bdevs_operational": 2, 00:13:31.149 "base_bdevs_list": [ 00:13:31.149 { 00:13:31.149 "name": null, 00:13:31.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.149 "is_configured": false, 00:13:31.149 "data_offset": 2048, 00:13:31.149 "data_size": 63488 00:13:31.149 }, 00:13:31.149 { 00:13:31.149 "name": "pt2", 00:13:31.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:31.149 "is_configured": true, 00:13:31.149 "data_offset": 2048, 00:13:31.149 "data_size": 63488 00:13:31.149 }, 00:13:31.149 { 00:13:31.149 "name": null, 00:13:31.149 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:31.149 "is_configured": false, 00:13:31.149 "data_offset": 2048, 00:13:31.149 "data_size": 63488 00:13:31.149 } 00:13:31.149 ] 00:13:31.149 }' 00:13:31.149 01:14:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.149 01:14:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.408 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.409 [2024-10-15 01:14:44.047409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:31.409 [2024-10-15 01:14:44.047520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.409 [2024-10-15 01:14:44.047544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:31.409 [2024-10-15 01:14:44.047555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.409 [2024-10-15 01:14:44.048005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.409 [2024-10-15 01:14:44.048031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:31.409 [2024-10-15 01:14:44.048106] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:31.409 [2024-10-15 01:14:44.048137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:31.409 [2024-10-15 01:14:44.048254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:13:31.409 [2024-10-15 01:14:44.048280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:31.409 [2024-10-15 01:14:44.048531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:31.409 [2024-10-15 01:14:44.049014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:13:31.409 [2024-10-15 01:14:44.049025] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:13:31.409 [2024-10-15 01:14:44.049205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.409 pt3 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.409 "name": "raid_bdev1", 00:13:31.409 "uuid": "73e22691-1883-43f2-b7a9-eef2764b3225", 00:13:31.409 "strip_size_kb": 64, 00:13:31.409 "state": "online", 00:13:31.409 "raid_level": 
"raid5f", 00:13:31.409 "superblock": true, 00:13:31.409 "num_base_bdevs": 3, 00:13:31.409 "num_base_bdevs_discovered": 2, 00:13:31.409 "num_base_bdevs_operational": 2, 00:13:31.409 "base_bdevs_list": [ 00:13:31.409 { 00:13:31.409 "name": null, 00:13:31.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.409 "is_configured": false, 00:13:31.409 "data_offset": 2048, 00:13:31.409 "data_size": 63488 00:13:31.409 }, 00:13:31.409 { 00:13:31.409 "name": "pt2", 00:13:31.409 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:31.409 "is_configured": true, 00:13:31.409 "data_offset": 2048, 00:13:31.409 "data_size": 63488 00:13:31.409 }, 00:13:31.409 { 00:13:31.409 "name": "pt3", 00:13:31.409 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:31.409 "is_configured": true, 00:13:31.409 "data_offset": 2048, 00:13:31.409 "data_size": 63488 00:13:31.409 } 00:13:31.409 ] 00:13:31.409 }' 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.409 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:31.979 [2024-10-15 01:14:44.594726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 73e22691-1883-43f2-b7a9-eef2764b3225 '!=' 73e22691-1883-43f2-b7a9-eef2764b3225 ']' 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91403 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 91403 ']' 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 91403 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91403 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91403' 00:13:31.979 killing process with pid 91403 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 91403 00:13:31.979 [2024-10-15 01:14:44.682872] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.979 [2024-10-15 01:14:44.683024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:13:31.979 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 91403 00:13:31.979 [2024-10-15 01:14:44.683142] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.979 [2024-10-15 01:14:44.683206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:13:32.240 [2024-10-15 01:14:44.716706] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:32.240 01:14:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:32.240 00:13:32.240 real 0m6.426s 00:13:32.240 user 0m10.841s 00:13:32.240 sys 0m1.316s 00:13:32.240 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.240 ************************************ 00:13:32.240 END TEST raid5f_superblock_test 00:13:32.240 ************************************ 00:13:32.240 01:14:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.500 01:14:44 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:32.500 01:14:44 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:32.500 01:14:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:32.500 01:14:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.500 01:14:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.500 ************************************ 00:13:32.500 START TEST raid5f_rebuild_test 00:13:32.500 ************************************ 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=91830 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 91830 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 91830 ']' 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.500 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.500 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:32.500 Zero copy mechanism will not be used. 00:13:32.500 [2024-10-15 01:14:45.094088] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:13:32.500 [2024-10-15 01:14:45.094234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91830 ] 00:13:32.500 [2024-10-15 01:14:45.221420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.760 [2024-10-15 01:14:45.247078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.760 [2024-10-15 01:14:45.290220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.760 [2024-10-15 01:14:45.290259] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.330 BaseBdev1_malloc 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.330 
01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.330 [2024-10-15 01:14:45.945050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:33.330 [2024-10-15 01:14:45.945196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.330 [2024-10-15 01:14:45.945240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:33.330 [2024-10-15 01:14:45.945275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.330 [2024-10-15 01:14:45.947406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.330 [2024-10-15 01:14:45.947474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:33.330 BaseBdev1 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.330 BaseBdev2_malloc 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.330 [2024-10-15 01:14:45.973814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:33.330 [2024-10-15 01:14:45.973906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.330 [2024-10-15 01:14:45.973943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:33.330 [2024-10-15 01:14:45.973953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.330 [2024-10-15 01:14:45.976072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.330 [2024-10-15 01:14:45.976115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:33.330 BaseBdev2 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.330 BaseBdev3_malloc 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.330 01:14:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.330 [2024-10-15 01:14:46.002490] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:33.330 [2024-10-15 01:14:46.002543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.330 [2024-10-15 01:14:46.002565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:33.330 [2024-10-15 01:14:46.002574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.330 [2024-10-15 01:14:46.004641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.330 [2024-10-15 01:14:46.004677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:33.330 BaseBdev3 00:13:33.330 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.330 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:33.331 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.331 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.331 spare_malloc 00:13:33.331 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.331 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:33.331 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.331 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.331 spare_delay 00:13:33.331 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.331 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:33.331 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:33.331 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.331 [2024-10-15 01:14:46.052265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:33.331 [2024-10-15 01:14:46.052376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.331 [2024-10-15 01:14:46.052411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:33.331 [2024-10-15 01:14:46.052420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.591 [2024-10-15 01:14:46.054653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.591 [2024-10-15 01:14:46.054690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:33.591 spare 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.591 [2024-10-15 01:14:46.064320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.591 [2024-10-15 01:14:46.066163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.591 [2024-10-15 01:14:46.066285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:33.591 [2024-10-15 01:14:46.066384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:33.591 [2024-10-15 01:14:46.066398] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:33.591 [2024-10-15 
01:14:46.066645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:33.591 [2024-10-15 01:14:46.067077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:33.591 [2024-10-15 01:14:46.067088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:33.591 [2024-10-15 01:14:46.067213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.591 "name": "raid_bdev1", 00:13:33.591 "uuid": "0d96fed2-a20d-49fa-8963-be9d549c8731", 00:13:33.591 "strip_size_kb": 64, 00:13:33.591 "state": "online", 00:13:33.591 "raid_level": "raid5f", 00:13:33.591 "superblock": false, 00:13:33.591 "num_base_bdevs": 3, 00:13:33.591 "num_base_bdevs_discovered": 3, 00:13:33.591 "num_base_bdevs_operational": 3, 00:13:33.591 "base_bdevs_list": [ 00:13:33.591 { 00:13:33.591 "name": "BaseBdev1", 00:13:33.591 "uuid": "66a7b9f8-265c-5c87-856d-e10c75430c4e", 00:13:33.591 "is_configured": true, 00:13:33.591 "data_offset": 0, 00:13:33.591 "data_size": 65536 00:13:33.591 }, 00:13:33.591 { 00:13:33.591 "name": "BaseBdev2", 00:13:33.591 "uuid": "c806d871-97b5-5990-a258-42bee243ddb5", 00:13:33.591 "is_configured": true, 00:13:33.591 "data_offset": 0, 00:13:33.591 "data_size": 65536 00:13:33.591 }, 00:13:33.591 { 00:13:33.591 "name": "BaseBdev3", 00:13:33.591 "uuid": "9c898aee-205d-529f-87d0-d3ce46ff2ebf", 00:13:33.591 "is_configured": true, 00:13:33.591 "data_offset": 0, 00:13:33.591 "data_size": 65536 00:13:33.591 } 00:13:33.591 ] 00:13:33.591 }' 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.591 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.851 01:14:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:33.851 [2024-10-15 01:14:46.464203] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:33.851 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:34.111 [2024-10-15 01:14:46.719564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:34.111 /dev/nbd0 00:13:34.111 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:34.111 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:34.111 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:34.111 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:34.111 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:34.111 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:34.111 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:34.111 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:34.111 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:34.111 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:34.112 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.112 1+0 records in 00:13:34.112 1+0 records out 00:13:34.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239387 s, 
17.1 MB/s 00:13:34.112 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.112 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:34.112 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.112 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:34.112 01:14:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:34.112 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.112 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:34.112 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:34.112 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:34.112 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:34.112 01:14:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:13:34.372 512+0 records in 00:13:34.372 512+0 records out 00:13:34.372 67108864 bytes (67 MB, 64 MiB) copied, 0.286728 s, 234 MB/s 00:13:34.372 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:34.372 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:34.372 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:34.372 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:34.372 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:34.372 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:13:34.372 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:34.632 [2024-10-15 01:14:47.269089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.632 [2024-10-15 01:14:47.307096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.632 01:14:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.891 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.891 "name": "raid_bdev1", 00:13:34.891 "uuid": "0d96fed2-a20d-49fa-8963-be9d549c8731", 00:13:34.891 "strip_size_kb": 64, 00:13:34.891 "state": "online", 00:13:34.891 "raid_level": "raid5f", 00:13:34.891 "superblock": false, 00:13:34.891 "num_base_bdevs": 3, 00:13:34.891 "num_base_bdevs_discovered": 2, 00:13:34.891 "num_base_bdevs_operational": 2, 00:13:34.891 "base_bdevs_list": [ 00:13:34.891 { 00:13:34.891 "name": null, 00:13:34.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.891 "is_configured": false, 00:13:34.891 "data_offset": 0, 00:13:34.891 "data_size": 65536 00:13:34.891 }, 
00:13:34.891 { 00:13:34.891 "name": "BaseBdev2", 00:13:34.891 "uuid": "c806d871-97b5-5990-a258-42bee243ddb5", 00:13:34.891 "is_configured": true, 00:13:34.891 "data_offset": 0, 00:13:34.891 "data_size": 65536 00:13:34.891 }, 00:13:34.891 { 00:13:34.891 "name": "BaseBdev3", 00:13:34.891 "uuid": "9c898aee-205d-529f-87d0-d3ce46ff2ebf", 00:13:34.891 "is_configured": true, 00:13:34.892 "data_offset": 0, 00:13:34.892 "data_size": 65536 00:13:34.892 } 00:13:34.892 ] 00:13:34.892 }' 00:13:34.892 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.892 01:14:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.151 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.151 01:14:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.151 01:14:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.151 [2024-10-15 01:14:47.758351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.151 [2024-10-15 01:14:47.763058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027cd0 00:13:35.151 01:14:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.151 01:14:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:35.151 [2024-10-15 01:14:47.765421] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.088 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.088 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.088 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.088 01:14:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.088 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.088 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.088 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.088 01:14:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.088 01:14:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.088 01:14:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.348 "name": "raid_bdev1", 00:13:36.348 "uuid": "0d96fed2-a20d-49fa-8963-be9d549c8731", 00:13:36.348 "strip_size_kb": 64, 00:13:36.348 "state": "online", 00:13:36.348 "raid_level": "raid5f", 00:13:36.348 "superblock": false, 00:13:36.348 "num_base_bdevs": 3, 00:13:36.348 "num_base_bdevs_discovered": 3, 00:13:36.348 "num_base_bdevs_operational": 3, 00:13:36.348 "process": { 00:13:36.348 "type": "rebuild", 00:13:36.348 "target": "spare", 00:13:36.348 "progress": { 00:13:36.348 "blocks": 20480, 00:13:36.348 "percent": 15 00:13:36.348 } 00:13:36.348 }, 00:13:36.348 "base_bdevs_list": [ 00:13:36.348 { 00:13:36.348 "name": "spare", 00:13:36.348 "uuid": "70d1041e-fb9f-5f1e-9bb4-75e471243030", 00:13:36.348 "is_configured": true, 00:13:36.348 "data_offset": 0, 00:13:36.348 "data_size": 65536 00:13:36.348 }, 00:13:36.348 { 00:13:36.348 "name": "BaseBdev2", 00:13:36.348 "uuid": "c806d871-97b5-5990-a258-42bee243ddb5", 00:13:36.348 "is_configured": true, 00:13:36.348 "data_offset": 0, 00:13:36.348 "data_size": 65536 00:13:36.348 }, 00:13:36.348 { 00:13:36.348 "name": "BaseBdev3", 00:13:36.348 "uuid": "9c898aee-205d-529f-87d0-d3ce46ff2ebf", 00:13:36.348 "is_configured": true, 00:13:36.348 
"data_offset": 0, 00:13:36.348 "data_size": 65536 00:13:36.348 } 00:13:36.348 ] 00:13:36.348 }' 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.348 [2024-10-15 01:14:48.925852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.348 [2024-10-15 01:14:48.974026] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:36.348 [2024-10-15 01:14:48.974096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.348 [2024-10-15 01:14:48.974114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.348 [2024-10-15 01:14:48.974124] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.348 01:14:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.348 01:14:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.348 01:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.348 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.348 "name": "raid_bdev1", 00:13:36.348 "uuid": "0d96fed2-a20d-49fa-8963-be9d549c8731", 00:13:36.348 "strip_size_kb": 64, 00:13:36.348 "state": "online", 00:13:36.348 "raid_level": "raid5f", 00:13:36.348 "superblock": false, 00:13:36.348 "num_base_bdevs": 3, 00:13:36.348 "num_base_bdevs_discovered": 2, 00:13:36.348 "num_base_bdevs_operational": 2, 00:13:36.348 "base_bdevs_list": [ 00:13:36.348 { 00:13:36.348 "name": null, 00:13:36.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.348 "is_configured": false, 00:13:36.348 "data_offset": 0, 00:13:36.348 "data_size": 65536 00:13:36.348 }, 00:13:36.348 { 00:13:36.348 
"name": "BaseBdev2", 00:13:36.348 "uuid": "c806d871-97b5-5990-a258-42bee243ddb5", 00:13:36.348 "is_configured": true, 00:13:36.348 "data_offset": 0, 00:13:36.348 "data_size": 65536 00:13:36.348 }, 00:13:36.348 { 00:13:36.348 "name": "BaseBdev3", 00:13:36.348 "uuid": "9c898aee-205d-529f-87d0-d3ce46ff2ebf", 00:13:36.348 "is_configured": true, 00:13:36.348 "data_offset": 0, 00:13:36.348 "data_size": 65536 00:13:36.348 } 00:13:36.348 ] 00:13:36.348 }' 00:13:36.348 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.348 01:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.918 "name": "raid_bdev1", 00:13:36.918 "uuid": "0d96fed2-a20d-49fa-8963-be9d549c8731", 00:13:36.918 "strip_size_kb": 64, 00:13:36.918 "state": 
"online", 00:13:36.918 "raid_level": "raid5f", 00:13:36.918 "superblock": false, 00:13:36.918 "num_base_bdevs": 3, 00:13:36.918 "num_base_bdevs_discovered": 2, 00:13:36.918 "num_base_bdevs_operational": 2, 00:13:36.918 "base_bdevs_list": [ 00:13:36.918 { 00:13:36.918 "name": null, 00:13:36.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.918 "is_configured": false, 00:13:36.918 "data_offset": 0, 00:13:36.918 "data_size": 65536 00:13:36.918 }, 00:13:36.918 { 00:13:36.918 "name": "BaseBdev2", 00:13:36.918 "uuid": "c806d871-97b5-5990-a258-42bee243ddb5", 00:13:36.918 "is_configured": true, 00:13:36.918 "data_offset": 0, 00:13:36.918 "data_size": 65536 00:13:36.918 }, 00:13:36.918 { 00:13:36.918 "name": "BaseBdev3", 00:13:36.918 "uuid": "9c898aee-205d-529f-87d0-d3ce46ff2ebf", 00:13:36.918 "is_configured": true, 00:13:36.918 "data_offset": 0, 00:13:36.918 "data_size": 65536 00:13:36.918 } 00:13:36.918 ] 00:13:36.918 }' 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.918 [2024-10-15 01:14:49.591487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.918 [2024-10-15 01:14:49.596265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:13:36.918 01:14:49 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.918 01:14:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:36.918 [2024-10-15 01:14:49.598488] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.301 "name": "raid_bdev1", 00:13:38.301 "uuid": "0d96fed2-a20d-49fa-8963-be9d549c8731", 00:13:38.301 "strip_size_kb": 64, 00:13:38.301 "state": "online", 00:13:38.301 "raid_level": "raid5f", 00:13:38.301 "superblock": false, 00:13:38.301 "num_base_bdevs": 3, 00:13:38.301 "num_base_bdevs_discovered": 3, 00:13:38.301 "num_base_bdevs_operational": 3, 00:13:38.301 "process": { 00:13:38.301 "type": "rebuild", 00:13:38.301 "target": "spare", 00:13:38.301 "progress": { 
00:13:38.301 "blocks": 20480, 00:13:38.301 "percent": 15 00:13:38.301 } 00:13:38.301 }, 00:13:38.301 "base_bdevs_list": [ 00:13:38.301 { 00:13:38.301 "name": "spare", 00:13:38.301 "uuid": "70d1041e-fb9f-5f1e-9bb4-75e471243030", 00:13:38.301 "is_configured": true, 00:13:38.301 "data_offset": 0, 00:13:38.301 "data_size": 65536 00:13:38.301 }, 00:13:38.301 { 00:13:38.301 "name": "BaseBdev2", 00:13:38.301 "uuid": "c806d871-97b5-5990-a258-42bee243ddb5", 00:13:38.301 "is_configured": true, 00:13:38.301 "data_offset": 0, 00:13:38.301 "data_size": 65536 00:13:38.301 }, 00:13:38.301 { 00:13:38.301 "name": "BaseBdev3", 00:13:38.301 "uuid": "9c898aee-205d-529f-87d0-d3ce46ff2ebf", 00:13:38.301 "is_configured": true, 00:13:38.301 "data_offset": 0, 00:13:38.301 "data_size": 65536 00:13:38.301 } 00:13:38.301 ] 00:13:38.301 }' 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=442 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.301 "name": "raid_bdev1", 00:13:38.301 "uuid": "0d96fed2-a20d-49fa-8963-be9d549c8731", 00:13:38.301 "strip_size_kb": 64, 00:13:38.301 "state": "online", 00:13:38.301 "raid_level": "raid5f", 00:13:38.301 "superblock": false, 00:13:38.301 "num_base_bdevs": 3, 00:13:38.301 "num_base_bdevs_discovered": 3, 00:13:38.301 "num_base_bdevs_operational": 3, 00:13:38.301 "process": { 00:13:38.301 "type": "rebuild", 00:13:38.301 "target": "spare", 00:13:38.301 "progress": { 00:13:38.301 "blocks": 22528, 00:13:38.301 "percent": 17 00:13:38.301 } 00:13:38.301 }, 00:13:38.301 "base_bdevs_list": [ 00:13:38.301 { 00:13:38.301 "name": "spare", 00:13:38.301 "uuid": "70d1041e-fb9f-5f1e-9bb4-75e471243030", 00:13:38.301 "is_configured": true, 00:13:38.301 "data_offset": 0, 00:13:38.301 "data_size": 65536 00:13:38.301 }, 00:13:38.301 { 00:13:38.301 "name": "BaseBdev2", 00:13:38.301 "uuid": "c806d871-97b5-5990-a258-42bee243ddb5", 00:13:38.301 "is_configured": true, 00:13:38.301 
"data_offset": 0, 00:13:38.301 "data_size": 65536 00:13:38.301 }, 00:13:38.301 { 00:13:38.301 "name": "BaseBdev3", 00:13:38.301 "uuid": "9c898aee-205d-529f-87d0-d3ce46ff2ebf", 00:13:38.301 "is_configured": true, 00:13:38.301 "data_offset": 0, 00:13:38.301 "data_size": 65536 00:13:38.301 } 00:13:38.301 ] 00:13:38.301 }' 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.301 01:14:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:39.240 01:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.240 01:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.240 01:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.240 01:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.240 01:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.240 01:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.240 01:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.240 01:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.240 01:14:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.240 01:14:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.240 01:14:51 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.240 01:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.240 "name": "raid_bdev1", 00:13:39.241 "uuid": "0d96fed2-a20d-49fa-8963-be9d549c8731", 00:13:39.241 "strip_size_kb": 64, 00:13:39.241 "state": "online", 00:13:39.241 "raid_level": "raid5f", 00:13:39.241 "superblock": false, 00:13:39.241 "num_base_bdevs": 3, 00:13:39.241 "num_base_bdevs_discovered": 3, 00:13:39.241 "num_base_bdevs_operational": 3, 00:13:39.241 "process": { 00:13:39.241 "type": "rebuild", 00:13:39.241 "target": "spare", 00:13:39.241 "progress": { 00:13:39.241 "blocks": 45056, 00:13:39.241 "percent": 34 00:13:39.241 } 00:13:39.241 }, 00:13:39.241 "base_bdevs_list": [ 00:13:39.241 { 00:13:39.241 "name": "spare", 00:13:39.241 "uuid": "70d1041e-fb9f-5f1e-9bb4-75e471243030", 00:13:39.241 "is_configured": true, 00:13:39.241 "data_offset": 0, 00:13:39.241 "data_size": 65536 00:13:39.241 }, 00:13:39.241 { 00:13:39.241 "name": "BaseBdev2", 00:13:39.241 "uuid": "c806d871-97b5-5990-a258-42bee243ddb5", 00:13:39.241 "is_configured": true, 00:13:39.241 "data_offset": 0, 00:13:39.241 "data_size": 65536 00:13:39.241 }, 00:13:39.241 { 00:13:39.241 "name": "BaseBdev3", 00:13:39.241 "uuid": "9c898aee-205d-529f-87d0-d3ce46ff2ebf", 00:13:39.241 "is_configured": true, 00:13:39.241 "data_offset": 0, 00:13:39.241 "data_size": 65536 00:13:39.241 } 00:13:39.241 ] 00:13:39.241 }' 00:13:39.241 01:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.500 01:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.500 01:14:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.500 01:14:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.500 01:14:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.441 01:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.441 01:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.441 01:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.441 01:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.441 01:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.441 01:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.441 01:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.441 01:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.441 01:14:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.441 01:14:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.441 01:14:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.441 01:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.441 "name": "raid_bdev1", 00:13:40.441 "uuid": "0d96fed2-a20d-49fa-8963-be9d549c8731", 00:13:40.441 "strip_size_kb": 64, 00:13:40.441 "state": "online", 00:13:40.441 "raid_level": "raid5f", 00:13:40.441 "superblock": false, 00:13:40.441 "num_base_bdevs": 3, 00:13:40.441 "num_base_bdevs_discovered": 3, 00:13:40.441 "num_base_bdevs_operational": 3, 00:13:40.441 "process": { 00:13:40.441 "type": "rebuild", 00:13:40.441 "target": "spare", 00:13:40.441 "progress": { 00:13:40.441 "blocks": 69632, 00:13:40.441 "percent": 53 00:13:40.441 } 00:13:40.441 }, 00:13:40.441 "base_bdevs_list": [ 00:13:40.441 { 00:13:40.441 "name": "spare", 00:13:40.441 
"uuid": "70d1041e-fb9f-5f1e-9bb4-75e471243030", 00:13:40.441 "is_configured": true, 00:13:40.441 "data_offset": 0, 00:13:40.441 "data_size": 65536 00:13:40.441 }, 00:13:40.441 { 00:13:40.441 "name": "BaseBdev2", 00:13:40.441 "uuid": "c806d871-97b5-5990-a258-42bee243ddb5", 00:13:40.441 "is_configured": true, 00:13:40.441 "data_offset": 0, 00:13:40.441 "data_size": 65536 00:13:40.441 }, 00:13:40.441 { 00:13:40.441 "name": "BaseBdev3", 00:13:40.441 "uuid": "9c898aee-205d-529f-87d0-d3ce46ff2ebf", 00:13:40.441 "is_configured": true, 00:13:40.441 "data_offset": 0, 00:13:40.441 "data_size": 65536 00:13:40.441 } 00:13:40.441 ] 00:13:40.441 }' 00:13:40.441 01:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.441 01:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.441 01:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.701 01:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.701 01:14:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.640 01:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.640 01:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.640 01:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.640 01:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.640 01:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.640 01:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.640 01:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.640 01:14:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.640 01:14:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.640 01:14:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.640 01:14:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.640 01:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.640 "name": "raid_bdev1", 00:13:41.640 "uuid": "0d96fed2-a20d-49fa-8963-be9d549c8731", 00:13:41.640 "strip_size_kb": 64, 00:13:41.640 "state": "online", 00:13:41.640 "raid_level": "raid5f", 00:13:41.640 "superblock": false, 00:13:41.640 "num_base_bdevs": 3, 00:13:41.640 "num_base_bdevs_discovered": 3, 00:13:41.640 "num_base_bdevs_operational": 3, 00:13:41.640 "process": { 00:13:41.640 "type": "rebuild", 00:13:41.640 "target": "spare", 00:13:41.640 "progress": { 00:13:41.640 "blocks": 92160, 00:13:41.640 "percent": 70 00:13:41.640 } 00:13:41.640 }, 00:13:41.640 "base_bdevs_list": [ 00:13:41.640 { 00:13:41.641 "name": "spare", 00:13:41.641 "uuid": "70d1041e-fb9f-5f1e-9bb4-75e471243030", 00:13:41.641 "is_configured": true, 00:13:41.641 "data_offset": 0, 00:13:41.641 "data_size": 65536 00:13:41.641 }, 00:13:41.641 { 00:13:41.641 "name": "BaseBdev2", 00:13:41.641 "uuid": "c806d871-97b5-5990-a258-42bee243ddb5", 00:13:41.641 "is_configured": true, 00:13:41.641 "data_offset": 0, 00:13:41.641 "data_size": 65536 00:13:41.641 }, 00:13:41.641 { 00:13:41.641 "name": "BaseBdev3", 00:13:41.641 "uuid": "9c898aee-205d-529f-87d0-d3ce46ff2ebf", 00:13:41.641 "is_configured": true, 00:13:41.641 "data_offset": 0, 00:13:41.641 "data_size": 65536 00:13:41.641 } 00:13:41.641 ] 00:13:41.641 }' 00:13:41.641 01:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.641 01:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.641 01:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.641 01:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.641 01:14:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.021 "name": "raid_bdev1", 00:13:43.021 "uuid": "0d96fed2-a20d-49fa-8963-be9d549c8731", 00:13:43.021 "strip_size_kb": 64, 00:13:43.021 "state": "online", 00:13:43.021 "raid_level": "raid5f", 00:13:43.021 "superblock": false, 00:13:43.021 "num_base_bdevs": 3, 00:13:43.021 "num_base_bdevs_discovered": 3, 00:13:43.021 
"num_base_bdevs_operational": 3, 00:13:43.021 "process": { 00:13:43.021 "type": "rebuild", 00:13:43.021 "target": "spare", 00:13:43.021 "progress": { 00:13:43.021 "blocks": 114688, 00:13:43.021 "percent": 87 00:13:43.021 } 00:13:43.021 }, 00:13:43.021 "base_bdevs_list": [ 00:13:43.021 { 00:13:43.021 "name": "spare", 00:13:43.021 "uuid": "70d1041e-fb9f-5f1e-9bb4-75e471243030", 00:13:43.021 "is_configured": true, 00:13:43.021 "data_offset": 0, 00:13:43.021 "data_size": 65536 00:13:43.021 }, 00:13:43.021 { 00:13:43.021 "name": "BaseBdev2", 00:13:43.021 "uuid": "c806d871-97b5-5990-a258-42bee243ddb5", 00:13:43.021 "is_configured": true, 00:13:43.021 "data_offset": 0, 00:13:43.021 "data_size": 65536 00:13:43.021 }, 00:13:43.021 { 00:13:43.021 "name": "BaseBdev3", 00:13:43.021 "uuid": "9c898aee-205d-529f-87d0-d3ce46ff2ebf", 00:13:43.021 "is_configured": true, 00:13:43.021 "data_offset": 0, 00:13:43.021 "data_size": 65536 00:13:43.021 } 00:13:43.021 ] 00:13:43.021 }' 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.021 01:14:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:43.589 [2024-10-15 01:14:56.041693] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:43.589 [2024-10-15 01:14:56.041817] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:43.589 [2024-10-15 01:14:56.041867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.848 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:13:43.848 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.848 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.849 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.849 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.849 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.849 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.849 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.849 01:14:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.849 01:14:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.849 01:14:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.849 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.849 "name": "raid_bdev1", 00:13:43.849 "uuid": "0d96fed2-a20d-49fa-8963-be9d549c8731", 00:13:43.849 "strip_size_kb": 64, 00:13:43.849 "state": "online", 00:13:43.849 "raid_level": "raid5f", 00:13:43.849 "superblock": false, 00:13:43.849 "num_base_bdevs": 3, 00:13:43.849 "num_base_bdevs_discovered": 3, 00:13:43.849 "num_base_bdevs_operational": 3, 00:13:43.849 "base_bdevs_list": [ 00:13:43.849 { 00:13:43.849 "name": "spare", 00:13:43.849 "uuid": "70d1041e-fb9f-5f1e-9bb4-75e471243030", 00:13:43.849 "is_configured": true, 00:13:43.849 "data_offset": 0, 00:13:43.849 "data_size": 65536 00:13:43.849 }, 00:13:43.849 { 00:13:43.849 "name": "BaseBdev2", 00:13:43.849 "uuid": "c806d871-97b5-5990-a258-42bee243ddb5", 00:13:43.849 "is_configured": true, 00:13:43.849 
"data_offset": 0, 00:13:43.849 "data_size": 65536 00:13:43.849 }, 00:13:43.849 { 00:13:43.849 "name": "BaseBdev3", 00:13:43.849 "uuid": "9c898aee-205d-529f-87d0-d3ce46ff2ebf", 00:13:43.849 "is_configured": true, 00:13:43.849 "data_offset": 0, 00:13:43.849 "data_size": 65536 00:13:43.849 } 00:13:43.849 ] 00:13:43.849 }' 00:13:43.849 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.849 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:43.849 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.109 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:44.109 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:44.109 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:44.109 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.109 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:44.109 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:44.109 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.109 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.109 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.109 01:14:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.109 01:14:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.109 01:14:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.109 01:14:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.109 "name": "raid_bdev1", 00:13:44.109 "uuid": "0d96fed2-a20d-49fa-8963-be9d549c8731", 00:13:44.109 "strip_size_kb": 64, 00:13:44.109 "state": "online", 00:13:44.109 "raid_level": "raid5f", 00:13:44.109 "superblock": false, 00:13:44.109 "num_base_bdevs": 3, 00:13:44.109 "num_base_bdevs_discovered": 3, 00:13:44.109 "num_base_bdevs_operational": 3, 00:13:44.109 "base_bdevs_list": [ 00:13:44.109 { 00:13:44.109 "name": "spare", 00:13:44.109 "uuid": "70d1041e-fb9f-5f1e-9bb4-75e471243030", 00:13:44.109 "is_configured": true, 00:13:44.109 "data_offset": 0, 00:13:44.109 "data_size": 65536 00:13:44.109 }, 00:13:44.109 { 00:13:44.109 "name": "BaseBdev2", 00:13:44.109 "uuid": "c806d871-97b5-5990-a258-42bee243ddb5", 00:13:44.109 "is_configured": true, 00:13:44.109 "data_offset": 0, 00:13:44.109 "data_size": 65536 00:13:44.109 }, 00:13:44.109 { 00:13:44.109 "name": "BaseBdev3", 00:13:44.109 "uuid": "9c898aee-205d-529f-87d0-d3ce46ff2ebf", 00:13:44.109 "is_configured": true, 00:13:44.109 "data_offset": 0, 00:13:44.109 "data_size": 65536 00:13:44.109 } 00:13:44.110 ] 00:13:44.110 }' 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.110 01:14:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.110 "name": "raid_bdev1", 00:13:44.110 "uuid": "0d96fed2-a20d-49fa-8963-be9d549c8731", 00:13:44.110 "strip_size_kb": 64, 00:13:44.110 "state": "online", 00:13:44.110 "raid_level": "raid5f", 00:13:44.110 "superblock": false, 00:13:44.110 "num_base_bdevs": 3, 00:13:44.110 "num_base_bdevs_discovered": 3, 00:13:44.110 "num_base_bdevs_operational": 3, 00:13:44.110 "base_bdevs_list": [ 00:13:44.110 { 00:13:44.110 "name": "spare", 00:13:44.110 "uuid": "70d1041e-fb9f-5f1e-9bb4-75e471243030", 00:13:44.110 "is_configured": true, 00:13:44.110 "data_offset": 0, 00:13:44.110 "data_size": 65536 00:13:44.110 }, 00:13:44.110 { 00:13:44.110 
"name": "BaseBdev2", 00:13:44.110 "uuid": "c806d871-97b5-5990-a258-42bee243ddb5", 00:13:44.110 "is_configured": true, 00:13:44.110 "data_offset": 0, 00:13:44.110 "data_size": 65536 00:13:44.110 }, 00:13:44.110 { 00:13:44.110 "name": "BaseBdev3", 00:13:44.110 "uuid": "9c898aee-205d-529f-87d0-d3ce46ff2ebf", 00:13:44.110 "is_configured": true, 00:13:44.110 "data_offset": 0, 00:13:44.110 "data_size": 65536 00:13:44.110 } 00:13:44.110 ] 00:13:44.110 }' 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.110 01:14:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.678 [2024-10-15 01:14:57.141442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:44.678 [2024-10-15 01:14:57.141475] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.678 [2024-10-15 01:14:57.141567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.678 [2024-10-15 01:14:57.141654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.678 [2024-10-15 01:14:57.141665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:44.678 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.679 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:44.679 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.679 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:44.679 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:44.679 /dev/nbd0 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.939 1+0 records in 00:13:44.939 1+0 records out 00:13:44.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449832 s, 9.1 MB/s 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:44.939 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:44.939 /dev/nbd1 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:45.199 1+0 records in 00:13:45.199 1+0 records out 00:13:45.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564211 s, 7.3 MB/s 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:45.199 01:14:57 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.199 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:45.458 01:14:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:45.458 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:45.458 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:45.458 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.458 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.459 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:45.459 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:45.459 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:13:45.459 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.459 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 91830 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 91830 ']' 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 91830 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91830 00:13:45.718 killing process with pid 91830 00:13:45.718 Received shutdown signal, test time was about 60.000000 seconds 00:13:45.718 00:13:45.718 Latency(us) 00:13:45.718 
[2024-10-15T01:14:58.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.718 [2024-10-15T01:14:58.442Z] =================================================================================================================== 00:13:45.718 [2024-10-15T01:14:58.442Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91830' 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 91830 00:13:45.718 [2024-10-15 01:14:58.306428] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:45.718 01:14:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 91830 00:13:45.718 [2024-10-15 01:14:58.346563] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:45.978 00:13:45.978 real 0m13.539s 00:13:45.978 user 0m16.966s 00:13:45.978 sys 0m1.934s 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.978 ************************************ 00:13:45.978 END TEST raid5f_rebuild_test 00:13:45.978 ************************************ 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.978 01:14:58 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:13:45.978 01:14:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:45.978 01:14:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:45.978 01:14:58 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:13:45.978 ************************************ 00:13:45.978 START TEST raid5f_rebuild_test_sb 00:13:45.978 ************************************ 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92248 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92248 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92248 ']' 00:13:45.978 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.979 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:45.979 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.979 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:45.979 01:14:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.239 [2024-10-15 01:14:58.728557] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:13:46.239 [2024-10-15 01:14:58.728800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:46.239 Zero copy mechanism will not be used. 
00:13:46.239 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92248 ] 00:13:46.239 [2024-10-15 01:14:58.874619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.239 [2024-10-15 01:14:58.901980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.239 [2024-10-15 01:14:58.944798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.239 [2024-10-15 01:14:58.944911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.808 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:46.808 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:46.808 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:46.808 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:46.808 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.808 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.068 BaseBdev1_malloc 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.068 [2024-10-15 01:14:59.551862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:47.068 [2024-10-15 01:14:59.551995] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:13:47.068 [2024-10-15 01:14:59.552027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:47.068 [2024-10-15 01:14:59.552045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.068 [2024-10-15 01:14:59.554166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.068 [2024-10-15 01:14:59.554206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:47.068 BaseBdev1 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.068 BaseBdev2_malloc 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.068 [2024-10-15 01:14:59.580524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:47.068 [2024-10-15 01:14:59.580570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.068 [2024-10-15 01:14:59.580590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:47.068 
[2024-10-15 01:14:59.580598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.068 [2024-10-15 01:14:59.582634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.068 [2024-10-15 01:14:59.582674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:47.068 BaseBdev2 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.068 BaseBdev3_malloc 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.068 [2024-10-15 01:14:59.609137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:47.068 [2024-10-15 01:14:59.609206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.068 [2024-10-15 01:14:59.609233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:47.068 [2024-10-15 01:14:59.609242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.068 [2024-10-15 01:14:59.611221] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.068 [2024-10-15 01:14:59.611252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:47.068 BaseBdev3 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.068 spare_malloc 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.068 spare_delay 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.068 [2024-10-15 01:14:59.660559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:47.068 [2024-10-15 01:14:59.660657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.068 [2024-10-15 01:14:59.660688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:13:47.068 [2024-10-15 01:14:59.660697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.068 [2024-10-15 01:14:59.662749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.068 [2024-10-15 01:14:59.662784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:47.068 spare 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.068 [2024-10-15 01:14:59.672619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.068 [2024-10-15 01:14:59.674399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.068 [2024-10-15 01:14:59.674460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.068 [2024-10-15 01:14:59.674606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:47.068 [2024-10-15 01:14:59.674619] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:47.068 [2024-10-15 01:14:59.674857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:47.068 [2024-10-15 01:14:59.675283] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:47.068 [2024-10-15 01:14:59.675295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:47.068 [2024-10-15 01:14:59.675408] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.068 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.069 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.069 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.069 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.069 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.069 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.069 "name": "raid_bdev1", 00:13:47.069 
"uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:13:47.069 "strip_size_kb": 64, 00:13:47.069 "state": "online", 00:13:47.069 "raid_level": "raid5f", 00:13:47.069 "superblock": true, 00:13:47.069 "num_base_bdevs": 3, 00:13:47.069 "num_base_bdevs_discovered": 3, 00:13:47.069 "num_base_bdevs_operational": 3, 00:13:47.069 "base_bdevs_list": [ 00:13:47.069 { 00:13:47.069 "name": "BaseBdev1", 00:13:47.069 "uuid": "6d3eda5d-65ae-516a-9312-6e9ca7cd8e58", 00:13:47.069 "is_configured": true, 00:13:47.069 "data_offset": 2048, 00:13:47.069 "data_size": 63488 00:13:47.069 }, 00:13:47.069 { 00:13:47.069 "name": "BaseBdev2", 00:13:47.069 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:13:47.069 "is_configured": true, 00:13:47.069 "data_offset": 2048, 00:13:47.069 "data_size": 63488 00:13:47.069 }, 00:13:47.069 { 00:13:47.069 "name": "BaseBdev3", 00:13:47.069 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:13:47.069 "is_configured": true, 00:13:47.069 "data_offset": 2048, 00:13:47.069 "data_size": 63488 00:13:47.069 } 00:13:47.069 ] 00:13:47.069 }' 00:13:47.069 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.069 01:14:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:47.655 [2024-10-15 01:15:00.120280] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 
-- # raid_bdev_size=126976 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:47.655 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:47.656 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:47.656 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:47.656 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:47.656 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:47.656 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:47.656 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:47.656 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:47.656 01:15:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:47.917 [2024-10-15 01:15:00.395779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:47.917 /dev/nbd0 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:47.917 1+0 records in 00:13:47.917 1+0 records out 00:13:47.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054326 s, 7.5 MB/s 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:47.917 01:15:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:47.917 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:13:48.177 496+0 records in 00:13:48.177 496+0 records out 00:13:48.177 65011712 bytes (65 MB, 62 MiB) copied, 0.280285 s, 232 MB/s 00:13:48.177 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:48.178 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:48.178 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:48.178 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:48.178 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:48.178 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:48.178 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:48.438 01:15:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:48.438 [2024-10-15 01:15:00.963203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.438 [2024-10-15 01:15:00.983249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.438 01:15:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.438 01:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.438 01:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.438 "name": "raid_bdev1", 00:13:48.438 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:13:48.438 "strip_size_kb": 64, 00:13:48.438 "state": "online", 00:13:48.438 "raid_level": "raid5f", 00:13:48.438 "superblock": true, 00:13:48.438 "num_base_bdevs": 3, 00:13:48.438 "num_base_bdevs_discovered": 2, 00:13:48.438 "num_base_bdevs_operational": 2, 00:13:48.438 "base_bdevs_list": [ 00:13:48.438 { 00:13:48.438 "name": null, 00:13:48.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.438 "is_configured": false, 00:13:48.438 "data_offset": 0, 00:13:48.438 "data_size": 63488 00:13:48.438 }, 00:13:48.438 { 00:13:48.438 "name": "BaseBdev2", 00:13:48.438 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:13:48.438 
"is_configured": true, 00:13:48.438 "data_offset": 2048, 00:13:48.438 "data_size": 63488 00:13:48.438 }, 00:13:48.438 { 00:13:48.438 "name": "BaseBdev3", 00:13:48.438 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:13:48.438 "is_configured": true, 00:13:48.438 "data_offset": 2048, 00:13:48.438 "data_size": 63488 00:13:48.438 } 00:13:48.438 ] 00:13:48.438 }' 00:13:48.438 01:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.438 01:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.698 01:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:48.698 01:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.698 01:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.958 [2024-10-15 01:15:01.422548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.958 [2024-10-15 01:15:01.427366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000255d0 00:13:48.958 01:15:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.958 01:15:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:48.958 [2024-10-15 01:15:01.429569] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.894 01:15:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.894 "name": "raid_bdev1", 00:13:49.894 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:13:49.894 "strip_size_kb": 64, 00:13:49.894 "state": "online", 00:13:49.894 "raid_level": "raid5f", 00:13:49.894 "superblock": true, 00:13:49.894 "num_base_bdevs": 3, 00:13:49.894 "num_base_bdevs_discovered": 3, 00:13:49.894 "num_base_bdevs_operational": 3, 00:13:49.894 "process": { 00:13:49.894 "type": "rebuild", 00:13:49.894 "target": "spare", 00:13:49.894 "progress": { 00:13:49.894 "blocks": 20480, 00:13:49.894 "percent": 16 00:13:49.894 } 00:13:49.894 }, 00:13:49.894 "base_bdevs_list": [ 00:13:49.894 { 00:13:49.894 "name": "spare", 00:13:49.894 "uuid": "69876e9e-e056-52c6-bbe2-e32d7e9cdce6", 00:13:49.894 "is_configured": true, 00:13:49.894 "data_offset": 2048, 00:13:49.894 "data_size": 63488 00:13:49.894 }, 00:13:49.894 { 00:13:49.894 "name": "BaseBdev2", 00:13:49.894 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:13:49.894 "is_configured": true, 00:13:49.894 "data_offset": 2048, 00:13:49.894 "data_size": 63488 00:13:49.894 }, 00:13:49.894 { 00:13:49.894 "name": "BaseBdev3", 00:13:49.894 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:13:49.894 "is_configured": true, 00:13:49.894 "data_offset": 2048, 00:13:49.894 "data_size": 
63488 00:13:49.894 } 00:13:49.894 ] 00:13:49.894 }' 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.894 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.894 [2024-10-15 01:15:02.593736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.153 [2024-10-15 01:15:02.636906] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:50.153 [2024-10-15 01:15:02.636967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.153 [2024-10-15 01:15:02.636983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.153 [2024-10-15 01:15:02.637007] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.153 01:15:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.153 "name": "raid_bdev1", 00:13:50.153 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:13:50.153 "strip_size_kb": 64, 00:13:50.153 "state": "online", 00:13:50.153 "raid_level": "raid5f", 00:13:50.153 "superblock": true, 00:13:50.153 "num_base_bdevs": 3, 00:13:50.153 "num_base_bdevs_discovered": 2, 00:13:50.153 "num_base_bdevs_operational": 2, 00:13:50.153 "base_bdevs_list": [ 00:13:50.153 { 00:13:50.153 "name": null, 00:13:50.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.153 "is_configured": false, 00:13:50.153 "data_offset": 0, 00:13:50.153 "data_size": 63488 
00:13:50.153 }, 00:13:50.153 { 00:13:50.153 "name": "BaseBdev2", 00:13:50.153 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:13:50.153 "is_configured": true, 00:13:50.153 "data_offset": 2048, 00:13:50.153 "data_size": 63488 00:13:50.153 }, 00:13:50.153 { 00:13:50.153 "name": "BaseBdev3", 00:13:50.153 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:13:50.153 "is_configured": true, 00:13:50.153 "data_offset": 2048, 00:13:50.153 "data_size": 63488 00:13:50.153 } 00:13:50.153 ] 00:13:50.153 }' 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.153 01:15:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.413 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:50.413 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.413 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.413 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.413 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.413 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.413 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.413 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.413 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.413 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.672 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.672 "name": "raid_bdev1", 00:13:50.672 "uuid": 
"667f082f-18e8-4421-bee2-66bbd3326b0a", 00:13:50.672 "strip_size_kb": 64, 00:13:50.672 "state": "online", 00:13:50.672 "raid_level": "raid5f", 00:13:50.672 "superblock": true, 00:13:50.672 "num_base_bdevs": 3, 00:13:50.672 "num_base_bdevs_discovered": 2, 00:13:50.672 "num_base_bdevs_operational": 2, 00:13:50.672 "base_bdevs_list": [ 00:13:50.672 { 00:13:50.672 "name": null, 00:13:50.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.672 "is_configured": false, 00:13:50.672 "data_offset": 0, 00:13:50.672 "data_size": 63488 00:13:50.672 }, 00:13:50.672 { 00:13:50.672 "name": "BaseBdev2", 00:13:50.672 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:13:50.672 "is_configured": true, 00:13:50.672 "data_offset": 2048, 00:13:50.672 "data_size": 63488 00:13:50.672 }, 00:13:50.672 { 00:13:50.672 "name": "BaseBdev3", 00:13:50.672 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:13:50.672 "is_configured": true, 00:13:50.672 "data_offset": 2048, 00:13:50.672 "data_size": 63488 00:13:50.672 } 00:13:50.672 ] 00:13:50.672 }' 00:13:50.672 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.672 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.672 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.672 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:50.672 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:50.672 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.672 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.672 [2024-10-15 01:15:03.261954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:50.672 [2024-10-15 01:15:03.266429] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:13:50.672 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.672 01:15:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:50.672 [2024-10-15 01:15:03.268542] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:51.610 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.610 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.610 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.610 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.610 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.610 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.610 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.610 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.610 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.610 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.610 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.610 "name": "raid_bdev1", 00:13:51.610 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:13:51.610 "strip_size_kb": 64, 00:13:51.610 "state": "online", 00:13:51.610 "raid_level": "raid5f", 00:13:51.610 "superblock": true, 00:13:51.610 "num_base_bdevs": 3, 00:13:51.610 "num_base_bdevs_discovered": 3, 00:13:51.610 
"num_base_bdevs_operational": 3, 00:13:51.610 "process": { 00:13:51.610 "type": "rebuild", 00:13:51.610 "target": "spare", 00:13:51.610 "progress": { 00:13:51.610 "blocks": 20480, 00:13:51.610 "percent": 16 00:13:51.610 } 00:13:51.610 }, 00:13:51.610 "base_bdevs_list": [ 00:13:51.610 { 00:13:51.610 "name": "spare", 00:13:51.610 "uuid": "69876e9e-e056-52c6-bbe2-e32d7e9cdce6", 00:13:51.610 "is_configured": true, 00:13:51.610 "data_offset": 2048, 00:13:51.610 "data_size": 63488 00:13:51.610 }, 00:13:51.611 { 00:13:51.611 "name": "BaseBdev2", 00:13:51.611 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:13:51.611 "is_configured": true, 00:13:51.611 "data_offset": 2048, 00:13:51.611 "data_size": 63488 00:13:51.611 }, 00:13:51.611 { 00:13:51.611 "name": "BaseBdev3", 00:13:51.611 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:13:51.611 "is_configured": true, 00:13:51.611 "data_offset": 2048, 00:13:51.611 "data_size": 63488 00:13:51.611 } 00:13:51.611 ] 00:13:51.611 }' 00:13:51.611 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:51.870 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:51.870 
01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=456 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.870 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.870 "name": "raid_bdev1", 00:13:51.870 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:13:51.870 "strip_size_kb": 64, 00:13:51.870 "state": "online", 00:13:51.870 "raid_level": "raid5f", 00:13:51.871 "superblock": true, 00:13:51.871 "num_base_bdevs": 3, 00:13:51.871 "num_base_bdevs_discovered": 3, 00:13:51.871 "num_base_bdevs_operational": 3, 00:13:51.871 "process": { 00:13:51.871 "type": "rebuild", 00:13:51.871 "target": "spare", 00:13:51.871 "progress": { 00:13:51.871 "blocks": 22528, 00:13:51.871 "percent": 17 00:13:51.871 } 00:13:51.871 }, 
00:13:51.871 "base_bdevs_list": [ 00:13:51.871 { 00:13:51.871 "name": "spare", 00:13:51.871 "uuid": "69876e9e-e056-52c6-bbe2-e32d7e9cdce6", 00:13:51.871 "is_configured": true, 00:13:51.871 "data_offset": 2048, 00:13:51.871 "data_size": 63488 00:13:51.871 }, 00:13:51.871 { 00:13:51.871 "name": "BaseBdev2", 00:13:51.871 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:13:51.871 "is_configured": true, 00:13:51.871 "data_offset": 2048, 00:13:51.871 "data_size": 63488 00:13:51.871 }, 00:13:51.871 { 00:13:51.871 "name": "BaseBdev3", 00:13:51.871 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:13:51.871 "is_configured": true, 00:13:51.871 "data_offset": 2048, 00:13:51.871 "data_size": 63488 00:13:51.871 } 00:13:51.871 ] 00:13:51.871 }' 00:13:51.871 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.871 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.871 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.871 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.871 01:15:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:53.250 01:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:53.250 01:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.250 01:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.250 01:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.250 01:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.250 01:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.250 
01:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.250 01:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.250 01:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.250 01:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.250 01:15:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.250 01:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.250 "name": "raid_bdev1", 00:13:53.250 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:13:53.250 "strip_size_kb": 64, 00:13:53.250 "state": "online", 00:13:53.250 "raid_level": "raid5f", 00:13:53.250 "superblock": true, 00:13:53.250 "num_base_bdevs": 3, 00:13:53.250 "num_base_bdevs_discovered": 3, 00:13:53.250 "num_base_bdevs_operational": 3, 00:13:53.250 "process": { 00:13:53.250 "type": "rebuild", 00:13:53.250 "target": "spare", 00:13:53.250 "progress": { 00:13:53.250 "blocks": 45056, 00:13:53.250 "percent": 35 00:13:53.250 } 00:13:53.250 }, 00:13:53.250 "base_bdevs_list": [ 00:13:53.250 { 00:13:53.250 "name": "spare", 00:13:53.250 "uuid": "69876e9e-e056-52c6-bbe2-e32d7e9cdce6", 00:13:53.250 "is_configured": true, 00:13:53.250 "data_offset": 2048, 00:13:53.250 "data_size": 63488 00:13:53.250 }, 00:13:53.250 { 00:13:53.250 "name": "BaseBdev2", 00:13:53.250 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:13:53.250 "is_configured": true, 00:13:53.250 "data_offset": 2048, 00:13:53.250 "data_size": 63488 00:13:53.250 }, 00:13:53.250 { 00:13:53.250 "name": "BaseBdev3", 00:13:53.250 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:13:53.250 "is_configured": true, 00:13:53.250 "data_offset": 2048, 00:13:53.250 "data_size": 63488 00:13:53.250 } 00:13:53.250 ] 00:13:53.250 }' 00:13:53.250 01:15:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.250 01:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.250 01:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.250 01:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.250 01:15:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:54.189 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:54.189 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.189 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.189 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.189 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.189 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.189 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.189 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.189 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.190 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.190 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.190 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.190 "name": "raid_bdev1", 00:13:54.190 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:13:54.190 
"strip_size_kb": 64, 00:13:54.190 "state": "online", 00:13:54.190 "raid_level": "raid5f", 00:13:54.190 "superblock": true, 00:13:54.190 "num_base_bdevs": 3, 00:13:54.190 "num_base_bdevs_discovered": 3, 00:13:54.190 "num_base_bdevs_operational": 3, 00:13:54.190 "process": { 00:13:54.190 "type": "rebuild", 00:13:54.190 "target": "spare", 00:13:54.190 "progress": { 00:13:54.190 "blocks": 69632, 00:13:54.190 "percent": 54 00:13:54.190 } 00:13:54.190 }, 00:13:54.190 "base_bdevs_list": [ 00:13:54.190 { 00:13:54.190 "name": "spare", 00:13:54.190 "uuid": "69876e9e-e056-52c6-bbe2-e32d7e9cdce6", 00:13:54.190 "is_configured": true, 00:13:54.190 "data_offset": 2048, 00:13:54.190 "data_size": 63488 00:13:54.190 }, 00:13:54.190 { 00:13:54.190 "name": "BaseBdev2", 00:13:54.190 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:13:54.190 "is_configured": true, 00:13:54.190 "data_offset": 2048, 00:13:54.190 "data_size": 63488 00:13:54.190 }, 00:13:54.190 { 00:13:54.190 "name": "BaseBdev3", 00:13:54.190 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:13:54.190 "is_configured": true, 00:13:54.190 "data_offset": 2048, 00:13:54.190 "data_size": 63488 00:13:54.190 } 00:13:54.190 ] 00:13:54.190 }' 00:13:54.190 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.190 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.190 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.190 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.190 01:15:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:55.128 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.128 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:13:55.128 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.128 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.128 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.128 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.128 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.128 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.128 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.128 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.389 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.389 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.389 "name": "raid_bdev1", 00:13:55.389 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:13:55.389 "strip_size_kb": 64, 00:13:55.389 "state": "online", 00:13:55.389 "raid_level": "raid5f", 00:13:55.389 "superblock": true, 00:13:55.389 "num_base_bdevs": 3, 00:13:55.389 "num_base_bdevs_discovered": 3, 00:13:55.390 "num_base_bdevs_operational": 3, 00:13:55.390 "process": { 00:13:55.390 "type": "rebuild", 00:13:55.390 "target": "spare", 00:13:55.390 "progress": { 00:13:55.390 "blocks": 92160, 00:13:55.390 "percent": 72 00:13:55.390 } 00:13:55.390 }, 00:13:55.390 "base_bdevs_list": [ 00:13:55.390 { 00:13:55.390 "name": "spare", 00:13:55.390 "uuid": "69876e9e-e056-52c6-bbe2-e32d7e9cdce6", 00:13:55.390 "is_configured": true, 00:13:55.390 "data_offset": 2048, 00:13:55.390 "data_size": 63488 00:13:55.390 }, 00:13:55.390 { 00:13:55.390 "name": "BaseBdev2", 00:13:55.390 "uuid": 
"09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:13:55.390 "is_configured": true, 00:13:55.390 "data_offset": 2048, 00:13:55.390 "data_size": 63488 00:13:55.390 }, 00:13:55.390 { 00:13:55.390 "name": "BaseBdev3", 00:13:55.390 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:13:55.390 "is_configured": true, 00:13:55.390 "data_offset": 2048, 00:13:55.390 "data_size": 63488 00:13:55.390 } 00:13:55.390 ] 00:13:55.390 }' 00:13:55.390 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.390 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.390 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.390 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.390 01:15:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:56.332 01:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:56.332 01:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.332 01:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.332 01:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.332 01:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.332 01:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.332 01:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.332 01:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.332 01:15:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:56.332 01:15:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.332 01:15:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.332 01:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.332 "name": "raid_bdev1", 00:13:56.332 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:13:56.332 "strip_size_kb": 64, 00:13:56.332 "state": "online", 00:13:56.332 "raid_level": "raid5f", 00:13:56.332 "superblock": true, 00:13:56.332 "num_base_bdevs": 3, 00:13:56.332 "num_base_bdevs_discovered": 3, 00:13:56.332 "num_base_bdevs_operational": 3, 00:13:56.332 "process": { 00:13:56.332 "type": "rebuild", 00:13:56.332 "target": "spare", 00:13:56.332 "progress": { 00:13:56.332 "blocks": 114688, 00:13:56.332 "percent": 90 00:13:56.332 } 00:13:56.332 }, 00:13:56.332 "base_bdevs_list": [ 00:13:56.332 { 00:13:56.332 "name": "spare", 00:13:56.332 "uuid": "69876e9e-e056-52c6-bbe2-e32d7e9cdce6", 00:13:56.333 "is_configured": true, 00:13:56.333 "data_offset": 2048, 00:13:56.333 "data_size": 63488 00:13:56.333 }, 00:13:56.333 { 00:13:56.333 "name": "BaseBdev2", 00:13:56.333 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:13:56.333 "is_configured": true, 00:13:56.333 "data_offset": 2048, 00:13:56.333 "data_size": 63488 00:13:56.333 }, 00:13:56.333 { 00:13:56.333 "name": "BaseBdev3", 00:13:56.333 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:13:56.333 "is_configured": true, 00:13:56.333 "data_offset": 2048, 00:13:56.333 "data_size": 63488 00:13:56.333 } 00:13:56.333 ] 00:13:56.333 }' 00:13:56.333 01:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.592 01:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.592 01:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.592 
01:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.592 01:15:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:56.851 [2024-10-15 01:15:09.503568] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:56.851 [2024-10-15 01:15:09.503695] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:56.851 [2024-10-15 01:15:09.503832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.419 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.419 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.419 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.419 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.419 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.419 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.419 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.419 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.419 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.419 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.679 "name": "raid_bdev1", 00:13:57.679 "uuid": 
"667f082f-18e8-4421-bee2-66bbd3326b0a", 00:13:57.679 "strip_size_kb": 64, 00:13:57.679 "state": "online", 00:13:57.679 "raid_level": "raid5f", 00:13:57.679 "superblock": true, 00:13:57.679 "num_base_bdevs": 3, 00:13:57.679 "num_base_bdevs_discovered": 3, 00:13:57.679 "num_base_bdevs_operational": 3, 00:13:57.679 "base_bdevs_list": [ 00:13:57.679 { 00:13:57.679 "name": "spare", 00:13:57.679 "uuid": "69876e9e-e056-52c6-bbe2-e32d7e9cdce6", 00:13:57.679 "is_configured": true, 00:13:57.679 "data_offset": 2048, 00:13:57.679 "data_size": 63488 00:13:57.679 }, 00:13:57.679 { 00:13:57.679 "name": "BaseBdev2", 00:13:57.679 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:13:57.679 "is_configured": true, 00:13:57.679 "data_offset": 2048, 00:13:57.679 "data_size": 63488 00:13:57.679 }, 00:13:57.679 { 00:13:57.679 "name": "BaseBdev3", 00:13:57.679 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:13:57.679 "is_configured": true, 00:13:57.679 "data_offset": 2048, 00:13:57.679 "data_size": 63488 00:13:57.679 } 00:13:57.679 ] 00:13:57.679 }' 00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:57.679 01:15:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:57.679 "name": "raid_bdev1",
00:13:57.679 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a",
00:13:57.679 "strip_size_kb": 64,
00:13:57.679 "state": "online",
00:13:57.679 "raid_level": "raid5f",
00:13:57.679 "superblock": true,
00:13:57.679 "num_base_bdevs": 3,
00:13:57.679 "num_base_bdevs_discovered": 3,
00:13:57.679 "num_base_bdevs_operational": 3,
00:13:57.679 "base_bdevs_list": [
00:13:57.679 {
00:13:57.679 "name": "spare",
00:13:57.679 "uuid": "69876e9e-e056-52c6-bbe2-e32d7e9cdce6",
00:13:57.679 "is_configured": true,
00:13:57.679 "data_offset": 2048,
00:13:57.679 "data_size": 63488
00:13:57.679 },
00:13:57.679 {
00:13:57.679 "name": "BaseBdev2",
00:13:57.679 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87",
00:13:57.679 "is_configured": true,
00:13:57.679 "data_offset": 2048,
00:13:57.679 "data_size": 63488
00:13:57.679 },
00:13:57.679 {
00:13:57.679 "name": "BaseBdev3",
00:13:57.679 "uuid": "426201c1-b417-54a5-9221-75a430bfd973",
00:13:57.679 "is_configured": true,
00:13:57.679 "data_offset": 2048,
00:13:57.679 "data_size": 63488
00:13:57.679 }
00:13:57.679 ]
00:13:57.679 }'
00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:57.679 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:57.939 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:57.939 "name": "raid_bdev1",
00:13:57.939 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a",
00:13:57.939 "strip_size_kb": 64,
00:13:57.939 "state": "online",
00:13:57.939 "raid_level": "raid5f",
00:13:57.939 "superblock": true,
00:13:57.939 "num_base_bdevs": 3,
00:13:57.939 "num_base_bdevs_discovered": 3,
00:13:57.939 "num_base_bdevs_operational": 3,
00:13:57.939 "base_bdevs_list": [
00:13:57.939 {
00:13:57.939 "name": "spare",
00:13:57.939 "uuid": "69876e9e-e056-52c6-bbe2-e32d7e9cdce6",
00:13:57.939 "is_configured": true,
00:13:57.939 "data_offset": 2048,
00:13:57.939 "data_size": 63488
00:13:57.940 },
00:13:57.940 {
00:13:57.940 "name": "BaseBdev2",
00:13:57.940 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87",
00:13:57.940 "is_configured": true,
00:13:57.940 "data_offset": 2048,
00:13:57.940 "data_size": 63488
00:13:57.940 },
00:13:57.940 {
00:13:57.940 "name": "BaseBdev3",
00:13:57.940 "uuid": "426201c1-b417-54a5-9221-75a430bfd973",
00:13:57.940 "is_configured": true,
00:13:57.940 "data_offset": 2048,
00:13:57.940 "data_size": 63488
00:13:57.940 }
00:13:57.940 ]
00:13:57.940 }'
00:13:57.940 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:57.940 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.200 [2024-10-15 01:15:10.835197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:58.200 [2024-10-15 01:15:10.835231] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:58.200 [2024-10-15 01:15:10.835317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:58.200 [2024-10-15 01:15:10.835399] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:58.200 [2024-10-15 01:15:10.835409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:58.200 01:15:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:13:58.460 /dev/nbd0
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:58.460 1+0 records in
00:13:58.460 1+0 records out
00:13:58.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337566 s, 12.1 MB/s
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:58.460 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:13:58.720 /dev/nbd1
00:13:58.720 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:13:58.720 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:58.721 1+0 records in
00:13:58.721 1+0 records out
00:13:58.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590527 s, 6.9 MB/s
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:58.721 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:58.981 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:59.241 [2024-10-15 01:15:11.924153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
[2024-10-15 01:15:11.924219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-15 01:15:11.924243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
[2024-10-15 01:15:11.924252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-15 01:15:11.926442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-15 01:15:11.926476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
[2024-10-15 01:15:11.926555] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
[2024-10-15 01:15:11.926597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
[2024-10-15 01:15:11.926708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-10-15 01:15:11.926792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
spare
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:59.241 01:15:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:59.501 [2024-10-15 01:15:12.026675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580
[2024-10-15 01:15:12.026698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
[2024-10-15 01:15:12.026945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043d50
[2024-10-15 01:15:12.027338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580
[2024-10-15 01:15:12.027357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580
[2024-10-15 01:15:12.027481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:59.501 "name": "raid_bdev1",
00:13:59.501 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a",
00:13:59.501 "strip_size_kb": 64,
00:13:59.501 "state": "online",
00:13:59.501 "raid_level": "raid5f",
00:13:59.501 "superblock": true,
00:13:59.501 "num_base_bdevs": 3,
00:13:59.501 "num_base_bdevs_discovered": 3,
00:13:59.501 "num_base_bdevs_operational": 3,
00:13:59.501 "base_bdevs_list": [
00:13:59.501 {
00:13:59.501 "name": "spare",
00:13:59.501 "uuid": "69876e9e-e056-52c6-bbe2-e32d7e9cdce6",
00:13:59.501 "is_configured": true,
00:13:59.501 "data_offset": 2048,
00:13:59.501 "data_size": 63488
00:13:59.501 },
00:13:59.501 {
00:13:59.501 "name": "BaseBdev2",
00:13:59.501 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87",
00:13:59.501 "is_configured": true,
00:13:59.501 "data_offset": 2048,
00:13:59.501 "data_size": 63488
00:13:59.501 },
00:13:59.501 {
00:13:59.501 "name": "BaseBdev3",
00:13:59.501 "uuid": "426201c1-b417-54a5-9221-75a430bfd973",
00:13:59.501 "is_configured": true,
00:13:59.501 "data_offset": 2048,
00:13:59.501 "data_size": 63488
00:13:59.501 }
00:13:59.501 ]
00:13:59.501 }'
00:13:59.501 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:59.761 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:59.761 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:59.761 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:59.761 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:59.761 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:59.761 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:59.761 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:59.761 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:59.761 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:59.761 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:59.761 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:00.021 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:00.021 "name": "raid_bdev1",
00:14:00.021 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a",
00:14:00.021 "strip_size_kb": 64,
00:14:00.021 "state": "online",
00:14:00.021 "raid_level": "raid5f",
00:14:00.021 "superblock": true,
00:14:00.021 "num_base_bdevs": 3,
00:14:00.021 "num_base_bdevs_discovered": 3,
00:14:00.021 "num_base_bdevs_operational": 3,
00:14:00.021 "base_bdevs_list": [
00:14:00.021 {
00:14:00.021 "name": "spare",
00:14:00.021 "uuid": "69876e9e-e056-52c6-bbe2-e32d7e9cdce6",
00:14:00.021 "is_configured": true,
00:14:00.021 "data_offset": 2048,
00:14:00.021 "data_size": 63488
00:14:00.021 },
00:14:00.021 {
00:14:00.021 "name": "BaseBdev2",
00:14:00.021 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87",
00:14:00.021 "is_configured": true,
00:14:00.021 "data_offset": 2048,
00:14:00.021 "data_size": 63488
00:14:00.021 },
00:14:00.021 {
00:14:00.021 "name": "BaseBdev3",
00:14:00.021 "uuid": "426201c1-b417-54a5-9221-75a430bfd973",
00:14:00.021 "is_configured": true,
00:14:00.021 "data_offset": 2048,
00:14:00.021 "data_size": 63488
00:14:00.021 }
00:14:00.021 ]
00:14:00.021 }'
00:14:00.021 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:00.021 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:00.021 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:00.021 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:00.021 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:00.021 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:00.021 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:00.021 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:14:00.021 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:00.021 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:00.022 [2024-10-15 01:15:12.651814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:00.022 "name": "raid_bdev1",
00:14:00.022 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a",
00:14:00.022 "strip_size_kb": 64,
00:14:00.022 "state": "online",
00:14:00.022 "raid_level": "raid5f",
00:14:00.022 "superblock": true,
00:14:00.022 "num_base_bdevs": 3,
00:14:00.022 "num_base_bdevs_discovered": 2,
00:14:00.022 "num_base_bdevs_operational": 2,
00:14:00.022 "base_bdevs_list": [
00:14:00.022 {
00:14:00.022 "name": null,
00:14:00.022 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:00.022 "is_configured": false,
00:14:00.022 "data_offset": 0,
00:14:00.022 "data_size": 63488
00:14:00.022 },
00:14:00.022 {
00:14:00.022 "name": "BaseBdev2",
00:14:00.022 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87",
00:14:00.022 "is_configured": true,
00:14:00.022 "data_offset": 2048,
00:14:00.022 "data_size": 63488
00:14:00.022 },
00:14:00.022 {
00:14:00.022 "name": "BaseBdev3",
00:14:00.022 "uuid": "426201c1-b417-54a5-9221-75a430bfd973",
00:14:00.022 "is_configured": true,
00:14:00.022 "data_offset": 2048,
00:14:00.022 "data_size": 63488
00:14:00.022 }
00:14:00.022 ]
00:14:00.022 }'
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:00.022 01:15:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:00.592 01:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:00.592 01:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:00.592 01:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:00.592 [2024-10-15 01:15:13.019238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
[2024-10-15 01:15:13.019456] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
[2024-10-15 01:15:13.019513] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
[2024-10-15 01:15:13.019584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
[2024-10-15 01:15:13.024002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043e20
00:14:00.592 01:15:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:00.592 01:15:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:14:00.592 [2024-10-15 01:15:13.026154] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:01.532 "name": "raid_bdev1",
00:14:01.532 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a",
00:14:01.532 "strip_size_kb": 64,
00:14:01.532 "state": "online",
00:14:01.532 "raid_level": "raid5f",
00:14:01.532 "superblock": true,
00:14:01.532 "num_base_bdevs": 3,
00:14:01.532 "num_base_bdevs_discovered": 3,
00:14:01.532 "num_base_bdevs_operational": 3,
00:14:01.532 "process": {
00:14:01.532 "type": "rebuild",
00:14:01.532 "target": "spare",
00:14:01.532 "progress": {
00:14:01.532 "blocks": 20480,
00:14:01.532 "percent": 16
00:14:01.532 }
00:14:01.532 },
00:14:01.532 "base_bdevs_list": [
00:14:01.532 {
00:14:01.532 "name": "spare",
00:14:01.532 "uuid": "69876e9e-e056-52c6-bbe2-e32d7e9cdce6",
00:14:01.532 "is_configured": true,
00:14:01.532 "data_offset": 2048,
00:14:01.532 "data_size": 63488
00:14:01.532 },
00:14:01.532 {
00:14:01.532 "name": "BaseBdev2",
00:14:01.532 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87",
00:14:01.532 "is_configured": true,
00:14:01.532 "data_offset": 2048,
00:14:01.532 "data_size": 63488
00:14:01.532 },
00:14:01.532 {
00:14:01.532 "name": "BaseBdev3",
00:14:01.532 "uuid": "426201c1-b417-54a5-9221-75a430bfd973",
00:14:01.532 "is_configured": true,
00:14:01.532 "data_offset": 2048,
00:14:01.532 "data_size": 63488
00:14:01.532 }
00:14:01.532 ]
00:14:01.532 }'
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:01.532 [2024-10-15 01:15:14.190354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-10-15 01:15:14.233082] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
[2024-10-15 01:15:14.233136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
[2024-10-15 01:15:14.233154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
[2024-10-15 01:15:14.233162] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:01.532 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:01.792 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:01.792 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:01.792 "name": "raid_bdev1",
00:14:01.792 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a",
00:14:01.792 "strip_size_kb": 64,
00:14:01.792 "state": "online",
00:14:01.792 "raid_level": "raid5f",
00:14:01.792 "superblock": true,
00:14:01.792 "num_base_bdevs": 3,
00:14:01.792 "num_base_bdevs_discovered": 2,
00:14:01.792 "num_base_bdevs_operational": 2,
00:14:01.792 "base_bdevs_list": [
00:14:01.792 {
00:14:01.792 "name": null,
00:14:01.792 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:01.792 "is_configured": false,
00:14:01.792 "data_offset": 0,
00:14:01.792 "data_size": 63488
00:14:01.792 },
00:14:01.792 {
00:14:01.792 "name": "BaseBdev2",
00:14:01.792 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87",
00:14:01.792 "is_configured": true,
00:14:01.792 "data_offset": 2048,
00:14:01.792 "data_size": 63488
00:14:01.792 },
00:14:01.792 {
00:14:01.792 "name": "BaseBdev3",
00:14:01.792 "uuid": "426201c1-b417-54a5-9221-75a430bfd973",
00:14:01.792 "is_configured": true,
00:14:01.792 "data_offset": 2048,
00:14:01.792 "data_size": 63488
00:14:01.792 }
00:14:01.792 ]
00:14:01.792 }'
00:14:01.792 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:01.792 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:02.052 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:02.052 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.052 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:02.052 [2024-10-15 01:15:14.697701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
[2024-10-15 01:15:14.697815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-10-15 01:15:14.697855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
[2024-10-15 01:15:14.697883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-10-15 01:15:14.698339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-10-15 01:15:14.698394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
[2024-10-15 01:15:14.698502] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
[2024-10-15 01:15:14.698540] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
[2024-10-15 01:15:14.698581] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:14:02.052 [2024-10-15 01:15:14.698653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:02.052 [2024-10-15 01:15:14.703039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043ef0
00:14:02.052 spare
00:14:02.052 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.052 01:15:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:14:02.052 [2024-10-15 01:15:14.705240] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:03.000 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:03.000 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:03.000 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:03.000 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:03.000 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:03.000 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:03.000 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.000 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:03.000 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:03.274 "name": "raid_bdev1",
00:14:03.274 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a",
00:14:03.274 "strip_size_kb": 64,
00:14:03.274 "state":
"online", 00:14:03.274 "raid_level": "raid5f", 00:14:03.274 "superblock": true, 00:14:03.274 "num_base_bdevs": 3, 00:14:03.274 "num_base_bdevs_discovered": 3, 00:14:03.274 "num_base_bdevs_operational": 3, 00:14:03.274 "process": { 00:14:03.274 "type": "rebuild", 00:14:03.274 "target": "spare", 00:14:03.274 "progress": { 00:14:03.274 "blocks": 20480, 00:14:03.274 "percent": 16 00:14:03.274 } 00:14:03.274 }, 00:14:03.274 "base_bdevs_list": [ 00:14:03.274 { 00:14:03.274 "name": "spare", 00:14:03.274 "uuid": "69876e9e-e056-52c6-bbe2-e32d7e9cdce6", 00:14:03.274 "is_configured": true, 00:14:03.274 "data_offset": 2048, 00:14:03.274 "data_size": 63488 00:14:03.274 }, 00:14:03.274 { 00:14:03.274 "name": "BaseBdev2", 00:14:03.274 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:14:03.274 "is_configured": true, 00:14:03.274 "data_offset": 2048, 00:14:03.274 "data_size": 63488 00:14:03.274 }, 00:14:03.274 { 00:14:03.274 "name": "BaseBdev3", 00:14:03.274 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:14:03.274 "is_configured": true, 00:14:03.274 "data_offset": 2048, 00:14:03.274 "data_size": 63488 00:14:03.274 } 00:14:03.274 ] 00:14:03.274 }' 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.274 [2024-10-15 01:15:15.861358] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.274 [2024-10-15 01:15:15.912469] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:03.274 [2024-10-15 01:15:15.912528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.274 [2024-10-15 01:15:15.912544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.274 [2024-10-15 01:15:15.912555] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.274 01:15:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.274 "name": "raid_bdev1", 00:14:03.274 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:14:03.274 "strip_size_kb": 64, 00:14:03.274 "state": "online", 00:14:03.274 "raid_level": "raid5f", 00:14:03.274 "superblock": true, 00:14:03.274 "num_base_bdevs": 3, 00:14:03.274 "num_base_bdevs_discovered": 2, 00:14:03.274 "num_base_bdevs_operational": 2, 00:14:03.274 "base_bdevs_list": [ 00:14:03.274 { 00:14:03.274 "name": null, 00:14:03.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.274 "is_configured": false, 00:14:03.274 "data_offset": 0, 00:14:03.274 "data_size": 63488 00:14:03.274 }, 00:14:03.274 { 00:14:03.274 "name": "BaseBdev2", 00:14:03.274 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:14:03.274 "is_configured": true, 00:14:03.274 "data_offset": 2048, 00:14:03.274 "data_size": 63488 00:14:03.274 }, 00:14:03.274 { 00:14:03.274 "name": "BaseBdev3", 00:14:03.274 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:14:03.274 "is_configured": true, 00:14:03.274 "data_offset": 2048, 00:14:03.274 "data_size": 63488 00:14:03.274 } 00:14:03.274 ] 00:14:03.274 }' 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.274 01:15:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.843 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.843 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.843 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.843 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.843 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.843 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.843 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.844 "name": "raid_bdev1", 00:14:03.844 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:14:03.844 "strip_size_kb": 64, 00:14:03.844 "state": "online", 00:14:03.844 "raid_level": "raid5f", 00:14:03.844 "superblock": true, 00:14:03.844 "num_base_bdevs": 3, 00:14:03.844 "num_base_bdevs_discovered": 2, 00:14:03.844 "num_base_bdevs_operational": 2, 00:14:03.844 "base_bdevs_list": [ 00:14:03.844 { 00:14:03.844 "name": null, 00:14:03.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.844 "is_configured": false, 00:14:03.844 "data_offset": 0, 00:14:03.844 "data_size": 63488 00:14:03.844 }, 00:14:03.844 { 00:14:03.844 "name": "BaseBdev2", 00:14:03.844 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:14:03.844 "is_configured": true, 00:14:03.844 "data_offset": 2048, 00:14:03.844 "data_size": 63488 00:14:03.844 }, 00:14:03.844 { 00:14:03.844 "name": "BaseBdev3", 00:14:03.844 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:14:03.844 
"is_configured": true, 00:14:03.844 "data_offset": 2048, 00:14:03.844 "data_size": 63488 00:14:03.844 } 00:14:03.844 ] 00:14:03.844 }' 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.844 [2024-10-15 01:15:16.508974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:03.844 [2024-10-15 01:15:16.509029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.844 [2024-10-15 01:15:16.509054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:03.844 [2024-10-15 01:15:16.509067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.844 [2024-10-15 01:15:16.509465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.844 
[2024-10-15 01:15:16.509483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:03.844 [2024-10-15 01:15:16.509545] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:03.844 [2024-10-15 01:15:16.509560] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:03.844 [2024-10-15 01:15:16.509568] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:03.844 [2024-10-15 01:15:16.509579] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:03.844 BaseBdev1 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.844 01:15:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:05.225 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.226 01:15:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.226 "name": "raid_bdev1", 00:14:05.226 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:14:05.226 "strip_size_kb": 64, 00:14:05.226 "state": "online", 00:14:05.226 "raid_level": "raid5f", 00:14:05.226 "superblock": true, 00:14:05.226 "num_base_bdevs": 3, 00:14:05.226 "num_base_bdevs_discovered": 2, 00:14:05.226 "num_base_bdevs_operational": 2, 00:14:05.226 "base_bdevs_list": [ 00:14:05.226 { 00:14:05.226 "name": null, 00:14:05.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.226 "is_configured": false, 00:14:05.226 "data_offset": 0, 00:14:05.226 "data_size": 63488 00:14:05.226 }, 00:14:05.226 { 00:14:05.226 "name": "BaseBdev2", 00:14:05.226 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:14:05.226 "is_configured": true, 00:14:05.226 "data_offset": 2048, 00:14:05.226 "data_size": 63488 00:14:05.226 }, 00:14:05.226 { 00:14:05.226 "name": "BaseBdev3", 00:14:05.226 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:14:05.226 "is_configured": true, 00:14:05.226 "data_offset": 2048, 00:14:05.226 "data_size": 63488 00:14:05.226 } 00:14:05.226 ] 00:14:05.226 }' 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.226 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.500 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.500 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.500 "name": "raid_bdev1", 00:14:05.500 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:14:05.500 "strip_size_kb": 64, 00:14:05.500 "state": "online", 00:14:05.500 "raid_level": "raid5f", 00:14:05.500 "superblock": true, 00:14:05.500 "num_base_bdevs": 3, 00:14:05.500 "num_base_bdevs_discovered": 2, 00:14:05.500 "num_base_bdevs_operational": 2, 00:14:05.500 "base_bdevs_list": [ 00:14:05.500 { 00:14:05.500 "name": null, 00:14:05.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.500 "is_configured": false, 00:14:05.500 "data_offset": 0, 00:14:05.500 "data_size": 63488 00:14:05.500 }, 00:14:05.500 { 00:14:05.500 "name": "BaseBdev2", 00:14:05.501 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 
00:14:05.501 "is_configured": true, 00:14:05.501 "data_offset": 2048, 00:14:05.501 "data_size": 63488 00:14:05.501 }, 00:14:05.501 { 00:14:05.501 "name": "BaseBdev3", 00:14:05.501 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:14:05.501 "is_configured": true, 00:14:05.501 "data_offset": 2048, 00:14:05.501 "data_size": 63488 00:14:05.501 } 00:14:05.501 ] 00:14:05.501 }' 00:14:05.501 01:15:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.501 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:05.501 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.501 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:05.501 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:05.501 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:05.501 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:05.501 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:05.501 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.501 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:05.501 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.501 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:05.502 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.502 01:15:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.502 [2024-10-15 01:15:18.078436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:05.502 [2024-10-15 01:15:18.078631] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:05.502 [2024-10-15 01:15:18.078686] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:05.502 request: 00:14:05.502 { 00:14:05.502 "base_bdev": "BaseBdev1", 00:14:05.502 "raid_bdev": "raid_bdev1", 00:14:05.502 "method": "bdev_raid_add_base_bdev", 00:14:05.502 "req_id": 1 00:14:05.502 } 00:14:05.502 Got JSON-RPC error response 00:14:05.502 response: 00:14:05.502 { 00:14:05.502 "code": -22, 00:14:05.502 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:05.502 } 00:14:05.502 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:05.502 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:05.502 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:05.502 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:05.502 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:05.502 01:15:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.444 "name": "raid_bdev1", 00:14:06.444 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:14:06.444 "strip_size_kb": 64, 00:14:06.444 "state": "online", 00:14:06.444 "raid_level": "raid5f", 00:14:06.444 "superblock": true, 00:14:06.444 "num_base_bdevs": 3, 00:14:06.444 "num_base_bdevs_discovered": 2, 00:14:06.444 "num_base_bdevs_operational": 2, 00:14:06.444 "base_bdevs_list": [ 00:14:06.444 { 00:14:06.444 "name": null, 00:14:06.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.444 "is_configured": false, 00:14:06.444 "data_offset": 0, 00:14:06.444 "data_size": 63488 00:14:06.444 }, 00:14:06.444 { 00:14:06.444 
"name": "BaseBdev2", 00:14:06.444 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:14:06.444 "is_configured": true, 00:14:06.444 "data_offset": 2048, 00:14:06.444 "data_size": 63488 00:14:06.444 }, 00:14:06.444 { 00:14:06.444 "name": "BaseBdev3", 00:14:06.444 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:14:06.444 "is_configured": true, 00:14:06.444 "data_offset": 2048, 00:14:06.444 "data_size": 63488 00:14:06.444 } 00:14:06.444 ] 00:14:06.444 }' 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.444 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.012 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.013 "name": "raid_bdev1", 00:14:07.013 "uuid": "667f082f-18e8-4421-bee2-66bbd3326b0a", 00:14:07.013 
"strip_size_kb": 64, 00:14:07.013 "state": "online", 00:14:07.013 "raid_level": "raid5f", 00:14:07.013 "superblock": true, 00:14:07.013 "num_base_bdevs": 3, 00:14:07.013 "num_base_bdevs_discovered": 2, 00:14:07.013 "num_base_bdevs_operational": 2, 00:14:07.013 "base_bdevs_list": [ 00:14:07.013 { 00:14:07.013 "name": null, 00:14:07.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.013 "is_configured": false, 00:14:07.013 "data_offset": 0, 00:14:07.013 "data_size": 63488 00:14:07.013 }, 00:14:07.013 { 00:14:07.013 "name": "BaseBdev2", 00:14:07.013 "uuid": "09bf59af-3c1c-51af-bd5e-64194e2c0d87", 00:14:07.013 "is_configured": true, 00:14:07.013 "data_offset": 2048, 00:14:07.013 "data_size": 63488 00:14:07.013 }, 00:14:07.013 { 00:14:07.013 "name": "BaseBdev3", 00:14:07.013 "uuid": "426201c1-b417-54a5-9221-75a430bfd973", 00:14:07.013 "is_configured": true, 00:14:07.013 "data_offset": 2048, 00:14:07.013 "data_size": 63488 00:14:07.013 } 00:14:07.013 ] 00:14:07.013 }' 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92248 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92248 ']' 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92248 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:07.013 01:15:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92248 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:07.013 killing process with pid 92248 00:14:07.013 Received shutdown signal, test time was about 60.000000 seconds 00:14:07.013 00:14:07.013 Latency(us) 00:14:07.013 [2024-10-15T01:15:19.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.013 [2024-10-15T01:15:19.737Z] =================================================================================================================== 00:14:07.013 [2024-10-15T01:15:19.737Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92248' 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92248 00:14:07.013 [2024-10-15 01:15:19.733994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:07.013 [2024-10-15 01:15:19.734109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.013 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92248 00:14:07.013 [2024-10-15 01:15:19.734174] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.013 [2024-10-15 01:15:19.734184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:14:07.273 [2024-10-15 01:15:19.774534] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:07.273 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:07.273 00:14:07.273 real 0m21.351s 00:14:07.273 user 0m27.732s 
00:14:07.273 sys 0m2.700s 00:14:07.273 ************************************ 00:14:07.273 END TEST raid5f_rebuild_test_sb 00:14:07.273 ************************************ 00:14:07.273 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:07.273 01:15:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.533 01:15:20 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:07.533 01:15:20 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:07.533 01:15:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:07.533 01:15:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:07.533 01:15:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:07.533 ************************************ 00:14:07.533 START TEST raid5f_state_function_test 00:14:07.533 ************************************ 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=92983 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92983' 00:14:07.533 Process raid pid: 92983 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 92983 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 92983 ']' 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.533 01:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.533 [2024-10-15 01:15:20.139246] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:14:07.533 [2024-10-15 01:15:20.139365] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.793 [2024-10-15 01:15:20.285789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.793 [2024-10-15 01:15:20.311909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.793 [2024-10-15 01:15:20.355497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.793 [2024-10-15 01:15:20.355526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.364 01:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:08.364 01:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:08.364 01:15:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:08.364 01:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.364 01:15:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.364 [2024-10-15 01:15:21.005667] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.364 [2024-10-15 01:15:21.005721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.364 [2024-10-15 01:15:21.005730] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:08.364 [2024-10-15 01:15:21.005741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:08.364 [2024-10-15 01:15:21.005747] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
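The harness above starts the `bdev_svc` app and then calls `waitforlisten`, which blocks until the new process accepts connections on `/var/tmp/spdk.sock` (with a bounded number of retries). The shell helper itself lives in `common/autotest_common.sh`; the following is only a hypothetical Python sketch of that polling pattern, not the suite's actual implementation:

```python
import os
import socket
import time

def waitforlisten(sock_path, max_retries=100, delay=0.1):
    # Poll until a UNIX-domain socket at sock_path accepts a connection,
    # mirroring the retry-bounded wait the shell helper performs.
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True  # process is up and listening
            except OSError:
                pass  # socket file exists but nothing is accepting yet
            finally:
                s.close()
        time.sleep(delay)
    return False  # gave up after max_retries attempts
```

The real helper also verifies the target PID is still alive between retries, so a crashed `bdev_svc` fails fast instead of burning the full retry budget.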
with name: BaseBdev3 00:14:08.364 [2024-10-15 01:15:21.005758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:08.364 [2024-10-15 01:15:21.005764] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:08.364 [2024-10-15 01:15:21.005772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.364 01:15:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.364 "name": "Existed_Raid", 00:14:08.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.364 "strip_size_kb": 64, 00:14:08.364 "state": "configuring", 00:14:08.364 "raid_level": "raid5f", 00:14:08.364 "superblock": false, 00:14:08.364 "num_base_bdevs": 4, 00:14:08.364 "num_base_bdevs_discovered": 0, 00:14:08.364 "num_base_bdevs_operational": 4, 00:14:08.364 "base_bdevs_list": [ 00:14:08.364 { 00:14:08.364 "name": "BaseBdev1", 00:14:08.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.364 "is_configured": false, 00:14:08.364 "data_offset": 0, 00:14:08.364 "data_size": 0 00:14:08.364 }, 00:14:08.364 { 00:14:08.364 "name": "BaseBdev2", 00:14:08.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.364 "is_configured": false, 00:14:08.364 "data_offset": 0, 00:14:08.364 "data_size": 0 00:14:08.364 }, 00:14:08.364 { 00:14:08.364 "name": "BaseBdev3", 00:14:08.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.364 "is_configured": false, 00:14:08.364 "data_offset": 0, 00:14:08.364 "data_size": 0 00:14:08.364 }, 00:14:08.364 { 00:14:08.364 "name": "BaseBdev4", 00:14:08.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.364 "is_configured": false, 00:14:08.364 "data_offset": 0, 00:14:08.364 "data_size": 0 00:14:08.364 } 00:14:08.364 ] 00:14:08.364 }' 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.364 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.934 01:15:21 bdev_raid.raid5f_state_function_test -- 
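To capture `raid_bdev_info`, the test pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'`, plucking one raid bdev's record out of the JSON array. The same selection can be sketched in Python; the `rpc_output` below is hypothetical sample data shaped like the dump in the log:

```python
import json

# Hypothetical sample shaped like `bdev_raid_get_bdevs all` output; the
# field values mirror the "configuring" state dumped in the log above.
rpc_output = json.dumps([
    {
        "name": "Existed_Raid",
        "uuid": "00000000-0000-0000-0000-000000000000",
        "strip_size_kb": 64,
        "state": "configuring",
        "raid_level": "raid5f",
        "superblock": False,
        "num_base_bdevs": 4,
        "num_base_bdevs_discovered": 0,
        "num_base_bdevs_operational": 4,
    }
])

def select_raid_bdev(raw, name):
    # Equivalent of: jq -r '.[] | select(.name == "NAME")'
    return next((b for b in json.loads(raw) if b["name"] == name), None)

info = select_raid_bdev(rpc_output, "Existed_Raid")
print(info["state"])  # configuring
```

Because no base bdevs exist yet, the RPC reports `num_base_bdevs_discovered: 0` while the raid stays in the `configuring` state rather than going online.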
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:08.934 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.934 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.934 [2024-10-15 01:15:21.496730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:08.934 [2024-10-15 01:15:21.496841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:08.934 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.934 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:08.934 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.934 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.934 [2024-10-15 01:15:21.508728] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.934 [2024-10-15 01:15:21.508800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.935 [2024-10-15 01:15:21.508827] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:08.935 [2024-10-15 01:15:21.508849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:08.935 [2024-10-15 01:15:21.508866] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:08.935 [2024-10-15 01:15:21.508886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:08.935 [2024-10-15 01:15:21.508903] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:08.935 [2024-10-15 01:15:21.508923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.935 [2024-10-15 01:15:21.529750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.935 BaseBdev1 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.935 
01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.935 [ 00:14:08.935 { 00:14:08.935 "name": "BaseBdev1", 00:14:08.935 "aliases": [ 00:14:08.935 "477833d2-6410-480a-a328-a8a9d217985f" 00:14:08.935 ], 00:14:08.935 "product_name": "Malloc disk", 00:14:08.935 "block_size": 512, 00:14:08.935 "num_blocks": 65536, 00:14:08.935 "uuid": "477833d2-6410-480a-a328-a8a9d217985f", 00:14:08.935 "assigned_rate_limits": { 00:14:08.935 "rw_ios_per_sec": 0, 00:14:08.935 "rw_mbytes_per_sec": 0, 00:14:08.935 "r_mbytes_per_sec": 0, 00:14:08.935 "w_mbytes_per_sec": 0 00:14:08.935 }, 00:14:08.935 "claimed": true, 00:14:08.935 "claim_type": "exclusive_write", 00:14:08.935 "zoned": false, 00:14:08.935 "supported_io_types": { 00:14:08.935 "read": true, 00:14:08.935 "write": true, 00:14:08.935 "unmap": true, 00:14:08.935 "flush": true, 00:14:08.935 "reset": true, 00:14:08.935 "nvme_admin": false, 00:14:08.935 "nvme_io": false, 00:14:08.935 "nvme_io_md": false, 00:14:08.935 "write_zeroes": true, 00:14:08.935 "zcopy": true, 00:14:08.935 "get_zone_info": false, 00:14:08.935 "zone_management": false, 00:14:08.935 "zone_append": false, 00:14:08.935 "compare": false, 00:14:08.935 "compare_and_write": false, 00:14:08.935 "abort": true, 00:14:08.935 "seek_hole": false, 00:14:08.935 "seek_data": false, 00:14:08.935 "copy": true, 00:14:08.935 "nvme_iov_md": false 00:14:08.935 }, 00:14:08.935 "memory_domains": [ 00:14:08.935 { 00:14:08.935 "dma_device_id": "system", 00:14:08.935 "dma_device_type": 1 00:14:08.935 }, 00:14:08.935 { 00:14:08.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.935 "dma_device_type": 2 00:14:08.935 } 00:14:08.935 ], 00:14:08.935 "driver_specific": {} 00:14:08.935 } 
00:14:08.935 ] 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.935 "name": "Existed_Raid", 00:14:08.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.935 "strip_size_kb": 64, 00:14:08.935 "state": "configuring", 00:14:08.935 "raid_level": "raid5f", 00:14:08.935 "superblock": false, 00:14:08.935 "num_base_bdevs": 4, 00:14:08.935 "num_base_bdevs_discovered": 1, 00:14:08.935 "num_base_bdevs_operational": 4, 00:14:08.935 "base_bdevs_list": [ 00:14:08.935 { 00:14:08.935 "name": "BaseBdev1", 00:14:08.935 "uuid": "477833d2-6410-480a-a328-a8a9d217985f", 00:14:08.935 "is_configured": true, 00:14:08.935 "data_offset": 0, 00:14:08.935 "data_size": 65536 00:14:08.935 }, 00:14:08.935 { 00:14:08.935 "name": "BaseBdev2", 00:14:08.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.935 "is_configured": false, 00:14:08.935 "data_offset": 0, 00:14:08.935 "data_size": 0 00:14:08.935 }, 00:14:08.935 { 00:14:08.935 "name": "BaseBdev3", 00:14:08.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.935 "is_configured": false, 00:14:08.935 "data_offset": 0, 00:14:08.935 "data_size": 0 00:14:08.935 }, 00:14:08.935 { 00:14:08.935 "name": "BaseBdev4", 00:14:08.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.935 "is_configured": false, 00:14:08.935 "data_offset": 0, 00:14:08.935 "data_size": 0 00:14:08.935 } 00:14:08.935 ] 00:14:08.935 }' 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.935 01:15:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.505 
[2024-10-15 01:15:22.060867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.505 [2024-10-15 01:15:22.060921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.505 [2024-10-15 01:15:22.072910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.505 [2024-10-15 01:15:22.074774] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.505 [2024-10-15 01:15:22.074864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.505 [2024-10-15 01:15:22.074894] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:09.505 [2024-10-15 01:15:22.074928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:09.505 [2024-10-15 01:15:22.074946] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:09.505 [2024-10-15 01:15:22.074981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.505 "name": "Existed_Raid", 00:14:09.505 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:09.505 "strip_size_kb": 64, 00:14:09.505 "state": "configuring", 00:14:09.505 "raid_level": "raid5f", 00:14:09.505 "superblock": false, 00:14:09.505 "num_base_bdevs": 4, 00:14:09.505 "num_base_bdevs_discovered": 1, 00:14:09.505 "num_base_bdevs_operational": 4, 00:14:09.505 "base_bdevs_list": [ 00:14:09.505 { 00:14:09.505 "name": "BaseBdev1", 00:14:09.505 "uuid": "477833d2-6410-480a-a328-a8a9d217985f", 00:14:09.505 "is_configured": true, 00:14:09.505 "data_offset": 0, 00:14:09.505 "data_size": 65536 00:14:09.505 }, 00:14:09.505 { 00:14:09.505 "name": "BaseBdev2", 00:14:09.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.505 "is_configured": false, 00:14:09.505 "data_offset": 0, 00:14:09.505 "data_size": 0 00:14:09.505 }, 00:14:09.505 { 00:14:09.505 "name": "BaseBdev3", 00:14:09.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.505 "is_configured": false, 00:14:09.505 "data_offset": 0, 00:14:09.505 "data_size": 0 00:14:09.505 }, 00:14:09.505 { 00:14:09.505 "name": "BaseBdev4", 00:14:09.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.505 "is_configured": false, 00:14:09.505 "data_offset": 0, 00:14:09.505 "data_size": 0 00:14:09.505 } 00:14:09.505 ] 00:14:09.505 }' 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.505 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.076 [2024-10-15 01:15:22.515161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.076 BaseBdev2 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.076 [ 00:14:10.076 { 00:14:10.076 "name": "BaseBdev2", 00:14:10.076 "aliases": [ 00:14:10.076 "4e573669-42a2-4f71-b7a0-1959b4aad381" 00:14:10.076 ], 00:14:10.076 "product_name": "Malloc disk", 00:14:10.076 "block_size": 512, 00:14:10.076 "num_blocks": 65536, 00:14:10.076 "uuid": "4e573669-42a2-4f71-b7a0-1959b4aad381", 00:14:10.076 "assigned_rate_limits": { 00:14:10.076 "rw_ios_per_sec": 0, 00:14:10.076 "rw_mbytes_per_sec": 0, 00:14:10.076 
"r_mbytes_per_sec": 0, 00:14:10.076 "w_mbytes_per_sec": 0 00:14:10.076 }, 00:14:10.076 "claimed": true, 00:14:10.076 "claim_type": "exclusive_write", 00:14:10.076 "zoned": false, 00:14:10.076 "supported_io_types": { 00:14:10.076 "read": true, 00:14:10.076 "write": true, 00:14:10.076 "unmap": true, 00:14:10.076 "flush": true, 00:14:10.076 "reset": true, 00:14:10.076 "nvme_admin": false, 00:14:10.076 "nvme_io": false, 00:14:10.076 "nvme_io_md": false, 00:14:10.076 "write_zeroes": true, 00:14:10.076 "zcopy": true, 00:14:10.076 "get_zone_info": false, 00:14:10.076 "zone_management": false, 00:14:10.076 "zone_append": false, 00:14:10.076 "compare": false, 00:14:10.076 "compare_and_write": false, 00:14:10.076 "abort": true, 00:14:10.076 "seek_hole": false, 00:14:10.076 "seek_data": false, 00:14:10.076 "copy": true, 00:14:10.076 "nvme_iov_md": false 00:14:10.076 }, 00:14:10.076 "memory_domains": [ 00:14:10.076 { 00:14:10.076 "dma_device_id": "system", 00:14:10.076 "dma_device_type": 1 00:14:10.076 }, 00:14:10.076 { 00:14:10.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.076 "dma_device_type": 2 00:14:10.076 } 00:14:10.076 ], 00:14:10.076 "driver_specific": {} 00:14:10.076 } 00:14:10.076 ] 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.076 "name": "Existed_Raid", 00:14:10.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.076 "strip_size_kb": 64, 00:14:10.076 "state": "configuring", 00:14:10.076 "raid_level": "raid5f", 00:14:10.076 "superblock": false, 00:14:10.076 "num_base_bdevs": 4, 00:14:10.076 "num_base_bdevs_discovered": 2, 00:14:10.076 "num_base_bdevs_operational": 4, 00:14:10.076 "base_bdevs_list": [ 00:14:10.076 { 00:14:10.076 "name": "BaseBdev1", 00:14:10.076 "uuid": 
"477833d2-6410-480a-a328-a8a9d217985f", 00:14:10.076 "is_configured": true, 00:14:10.076 "data_offset": 0, 00:14:10.076 "data_size": 65536 00:14:10.076 }, 00:14:10.076 { 00:14:10.076 "name": "BaseBdev2", 00:14:10.076 "uuid": "4e573669-42a2-4f71-b7a0-1959b4aad381", 00:14:10.076 "is_configured": true, 00:14:10.076 "data_offset": 0, 00:14:10.076 "data_size": 65536 00:14:10.076 }, 00:14:10.076 { 00:14:10.076 "name": "BaseBdev3", 00:14:10.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.076 "is_configured": false, 00:14:10.076 "data_offset": 0, 00:14:10.076 "data_size": 0 00:14:10.076 }, 00:14:10.076 { 00:14:10.076 "name": "BaseBdev4", 00:14:10.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.076 "is_configured": false, 00:14:10.076 "data_offset": 0, 00:14:10.076 "data_size": 0 00:14:10.076 } 00:14:10.076 ] 00:14:10.076 }' 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.076 01:15:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.337 [2024-10-15 01:15:23.025783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.337 BaseBdev3 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
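After BaseBdev2 is claimed, `verify_raid_bdev_state Existed_Raid configuring raid5f 64 4` checks that the reported state, level, strip size, and operational count match, and that the discovered count agrees with how many entries in `base_bdevs_list` are configured (here 2 of 4). A hypothetical Python restatement of those field checks, fed a trimmed copy of the state dumped above:

```python
def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    # Sketch of the checks the shell helper performs on the jq-selected
    # record: compare state fields and cross-check the discovered count
    # against the configured entries in base_bdevs_list.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational
            and discovered == info["num_base_bdevs_discovered"])

# Trimmed copy of the log's state after BaseBdev2 is claimed.
info = {
    "state": "configuring",
    "raid_level": "raid5f",
    "strip_size_kb": 64,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True},
        {"name": "BaseBdev2", "is_configured": True},
        {"name": "BaseBdev3", "is_configured": False},
        {"name": "BaseBdev4", "is_configured": False},
    ],
}
print(verify_raid_bdev_state(info, "configuring", "raid5f", 64, 4))  # True
```

The raid stays `configuring` until all four base bdevs are discovered, which is why the loop over `(( i < num_base_bdevs ))` re-verifies this same expected state after each `bdev_malloc_create`.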
# local bdev_timeout= 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.337 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.337 [ 00:14:10.337 { 00:14:10.337 "name": "BaseBdev3", 00:14:10.337 "aliases": [ 00:14:10.337 "15f7c99e-68a6-48f8-9670-4bbf70b1c452" 00:14:10.337 ], 00:14:10.337 "product_name": "Malloc disk", 00:14:10.337 "block_size": 512, 00:14:10.337 "num_blocks": 65536, 00:14:10.337 "uuid": "15f7c99e-68a6-48f8-9670-4bbf70b1c452", 00:14:10.337 "assigned_rate_limits": { 00:14:10.337 "rw_ios_per_sec": 0, 00:14:10.337 "rw_mbytes_per_sec": 0, 00:14:10.337 "r_mbytes_per_sec": 0, 00:14:10.337 "w_mbytes_per_sec": 0 00:14:10.337 }, 00:14:10.337 "claimed": true, 00:14:10.337 "claim_type": "exclusive_write", 00:14:10.337 "zoned": false, 00:14:10.337 "supported_io_types": { 00:14:10.337 "read": true, 00:14:10.337 "write": true, 00:14:10.337 "unmap": true, 00:14:10.337 "flush": true, 00:14:10.337 "reset": true, 00:14:10.337 "nvme_admin": false, 
00:14:10.337 "nvme_io": false, 00:14:10.337 "nvme_io_md": false, 00:14:10.337 "write_zeroes": true, 00:14:10.337 "zcopy": true, 00:14:10.337 "get_zone_info": false, 00:14:10.337 "zone_management": false, 00:14:10.337 "zone_append": false, 00:14:10.337 "compare": false, 00:14:10.337 "compare_and_write": false, 00:14:10.337 "abort": true, 00:14:10.337 "seek_hole": false, 00:14:10.337 "seek_data": false, 00:14:10.337 "copy": true, 00:14:10.597 "nvme_iov_md": false 00:14:10.597 }, 00:14:10.597 "memory_domains": [ 00:14:10.597 { 00:14:10.597 "dma_device_id": "system", 00:14:10.597 "dma_device_type": 1 00:14:10.597 }, 00:14:10.597 { 00:14:10.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.597 "dma_device_type": 2 00:14:10.597 } 00:14:10.597 ], 00:14:10.597 "driver_specific": {} 00:14:10.597 } 00:14:10.597 ] 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.597 "name": "Existed_Raid", 00:14:10.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.597 "strip_size_kb": 64, 00:14:10.597 "state": "configuring", 00:14:10.597 "raid_level": "raid5f", 00:14:10.597 "superblock": false, 00:14:10.597 "num_base_bdevs": 4, 00:14:10.597 "num_base_bdevs_discovered": 3, 00:14:10.597 "num_base_bdevs_operational": 4, 00:14:10.597 "base_bdevs_list": [ 00:14:10.597 { 00:14:10.597 "name": "BaseBdev1", 00:14:10.597 "uuid": "477833d2-6410-480a-a328-a8a9d217985f", 00:14:10.597 "is_configured": true, 00:14:10.597 "data_offset": 0, 00:14:10.597 "data_size": 65536 00:14:10.597 }, 00:14:10.597 { 00:14:10.597 "name": "BaseBdev2", 00:14:10.597 "uuid": "4e573669-42a2-4f71-b7a0-1959b4aad381", 00:14:10.597 "is_configured": true, 00:14:10.597 "data_offset": 0, 00:14:10.597 "data_size": 65536 00:14:10.597 }, 00:14:10.597 { 
00:14:10.597 "name": "BaseBdev3", 00:14:10.597 "uuid": "15f7c99e-68a6-48f8-9670-4bbf70b1c452", 00:14:10.597 "is_configured": true, 00:14:10.597 "data_offset": 0, 00:14:10.597 "data_size": 65536 00:14:10.597 }, 00:14:10.597 { 00:14:10.597 "name": "BaseBdev4", 00:14:10.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.597 "is_configured": false, 00:14:10.597 "data_offset": 0, 00:14:10.597 "data_size": 0 00:14:10.597 } 00:14:10.597 ] 00:14:10.597 }' 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.597 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.857 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:10.857 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.857 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.857 [2024-10-15 01:15:23.508162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:10.858 [2024-10-15 01:15:23.508296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:10.858 [2024-10-15 01:15:23.508322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:10.858 [2024-10-15 01:15:23.508653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:10.858 [2024-10-15 01:15:23.509155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:10.858 [2024-10-15 01:15:23.509238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:10.858 [2024-10-15 01:15:23.509498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.858 BaseBdev4 00:14:10.858 01:15:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.858 [ 00:14:10.858 { 00:14:10.858 "name": "BaseBdev4", 00:14:10.858 "aliases": [ 00:14:10.858 "a47935d6-c98b-4010-87af-d3f0a6d66d13" 00:14:10.858 ], 00:14:10.858 "product_name": "Malloc disk", 00:14:10.858 "block_size": 512, 00:14:10.858 "num_blocks": 65536, 00:14:10.858 "uuid": "a47935d6-c98b-4010-87af-d3f0a6d66d13", 00:14:10.858 "assigned_rate_limits": { 00:14:10.858 "rw_ios_per_sec": 0, 00:14:10.858 
"rw_mbytes_per_sec": 0, 00:14:10.858 "r_mbytes_per_sec": 0, 00:14:10.858 "w_mbytes_per_sec": 0 00:14:10.858 }, 00:14:10.858 "claimed": true, 00:14:10.858 "claim_type": "exclusive_write", 00:14:10.858 "zoned": false, 00:14:10.858 "supported_io_types": { 00:14:10.858 "read": true, 00:14:10.858 "write": true, 00:14:10.858 "unmap": true, 00:14:10.858 "flush": true, 00:14:10.858 "reset": true, 00:14:10.858 "nvme_admin": false, 00:14:10.858 "nvme_io": false, 00:14:10.858 "nvme_io_md": false, 00:14:10.858 "write_zeroes": true, 00:14:10.858 "zcopy": true, 00:14:10.858 "get_zone_info": false, 00:14:10.858 "zone_management": false, 00:14:10.858 "zone_append": false, 00:14:10.858 "compare": false, 00:14:10.858 "compare_and_write": false, 00:14:10.858 "abort": true, 00:14:10.858 "seek_hole": false, 00:14:10.858 "seek_data": false, 00:14:10.858 "copy": true, 00:14:10.858 "nvme_iov_md": false 00:14:10.858 }, 00:14:10.858 "memory_domains": [ 00:14:10.858 { 00:14:10.858 "dma_device_id": "system", 00:14:10.858 "dma_device_type": 1 00:14:10.858 }, 00:14:10.858 { 00:14:10.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.858 "dma_device_type": 2 00:14:10.858 } 00:14:10.858 ], 00:14:10.858 "driver_specific": {} 00:14:10.858 } 00:14:10.858 ] 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.858 01:15:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.858 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.118 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.118 "name": "Existed_Raid", 00:14:11.118 "uuid": "b91b2e78-fb01-4d67-97b9-7bb045638d91", 00:14:11.118 "strip_size_kb": 64, 00:14:11.118 "state": "online", 00:14:11.118 "raid_level": "raid5f", 00:14:11.118 "superblock": false, 00:14:11.118 "num_base_bdevs": 4, 00:14:11.118 "num_base_bdevs_discovered": 4, 00:14:11.118 "num_base_bdevs_operational": 4, 00:14:11.118 "base_bdevs_list": [ 00:14:11.118 { 00:14:11.118 "name": 
"BaseBdev1", 00:14:11.118 "uuid": "477833d2-6410-480a-a328-a8a9d217985f", 00:14:11.118 "is_configured": true, 00:14:11.118 "data_offset": 0, 00:14:11.118 "data_size": 65536 00:14:11.118 }, 00:14:11.118 { 00:14:11.118 "name": "BaseBdev2", 00:14:11.118 "uuid": "4e573669-42a2-4f71-b7a0-1959b4aad381", 00:14:11.118 "is_configured": true, 00:14:11.118 "data_offset": 0, 00:14:11.118 "data_size": 65536 00:14:11.118 }, 00:14:11.118 { 00:14:11.118 "name": "BaseBdev3", 00:14:11.118 "uuid": "15f7c99e-68a6-48f8-9670-4bbf70b1c452", 00:14:11.118 "is_configured": true, 00:14:11.118 "data_offset": 0, 00:14:11.118 "data_size": 65536 00:14:11.118 }, 00:14:11.118 { 00:14:11.118 "name": "BaseBdev4", 00:14:11.118 "uuid": "a47935d6-c98b-4010-87af-d3f0a6d66d13", 00:14:11.118 "is_configured": true, 00:14:11.118 "data_offset": 0, 00:14:11.118 "data_size": 65536 00:14:11.118 } 00:14:11.118 ] 00:14:11.118 }' 00:14:11.118 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.118 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.378 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:11.378 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:11.378 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:11.378 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:11.378 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:11.378 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:11.378 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:11.378 01:15:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:11.378 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.378 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.378 [2024-10-15 01:15:23.967670] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.378 01:15:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.378 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:11.378 "name": "Existed_Raid", 00:14:11.378 "aliases": [ 00:14:11.378 "b91b2e78-fb01-4d67-97b9-7bb045638d91" 00:14:11.378 ], 00:14:11.378 "product_name": "Raid Volume", 00:14:11.378 "block_size": 512, 00:14:11.378 "num_blocks": 196608, 00:14:11.378 "uuid": "b91b2e78-fb01-4d67-97b9-7bb045638d91", 00:14:11.378 "assigned_rate_limits": { 00:14:11.378 "rw_ios_per_sec": 0, 00:14:11.378 "rw_mbytes_per_sec": 0, 00:14:11.378 "r_mbytes_per_sec": 0, 00:14:11.378 "w_mbytes_per_sec": 0 00:14:11.378 }, 00:14:11.378 "claimed": false, 00:14:11.378 "zoned": false, 00:14:11.378 "supported_io_types": { 00:14:11.378 "read": true, 00:14:11.378 "write": true, 00:14:11.378 "unmap": false, 00:14:11.378 "flush": false, 00:14:11.378 "reset": true, 00:14:11.378 "nvme_admin": false, 00:14:11.378 "nvme_io": false, 00:14:11.378 "nvme_io_md": false, 00:14:11.378 "write_zeroes": true, 00:14:11.378 "zcopy": false, 00:14:11.378 "get_zone_info": false, 00:14:11.378 "zone_management": false, 00:14:11.378 "zone_append": false, 00:14:11.378 "compare": false, 00:14:11.378 "compare_and_write": false, 00:14:11.378 "abort": false, 00:14:11.378 "seek_hole": false, 00:14:11.378 "seek_data": false, 00:14:11.378 "copy": false, 00:14:11.378 "nvme_iov_md": false 00:14:11.378 }, 00:14:11.378 "driver_specific": { 00:14:11.378 "raid": { 00:14:11.378 "uuid": "b91b2e78-fb01-4d67-97b9-7bb045638d91", 00:14:11.378 "strip_size_kb": 64, 
00:14:11.378 "state": "online", 00:14:11.378 "raid_level": "raid5f", 00:14:11.378 "superblock": false, 00:14:11.378 "num_base_bdevs": 4, 00:14:11.378 "num_base_bdevs_discovered": 4, 00:14:11.378 "num_base_bdevs_operational": 4, 00:14:11.378 "base_bdevs_list": [ 00:14:11.378 { 00:14:11.378 "name": "BaseBdev1", 00:14:11.378 "uuid": "477833d2-6410-480a-a328-a8a9d217985f", 00:14:11.378 "is_configured": true, 00:14:11.378 "data_offset": 0, 00:14:11.378 "data_size": 65536 00:14:11.378 }, 00:14:11.378 { 00:14:11.378 "name": "BaseBdev2", 00:14:11.378 "uuid": "4e573669-42a2-4f71-b7a0-1959b4aad381", 00:14:11.378 "is_configured": true, 00:14:11.378 "data_offset": 0, 00:14:11.378 "data_size": 65536 00:14:11.378 }, 00:14:11.378 { 00:14:11.378 "name": "BaseBdev3", 00:14:11.378 "uuid": "15f7c99e-68a6-48f8-9670-4bbf70b1c452", 00:14:11.378 "is_configured": true, 00:14:11.378 "data_offset": 0, 00:14:11.378 "data_size": 65536 00:14:11.378 }, 00:14:11.379 { 00:14:11.379 "name": "BaseBdev4", 00:14:11.379 "uuid": "a47935d6-c98b-4010-87af-d3f0a6d66d13", 00:14:11.379 "is_configured": true, 00:14:11.379 "data_offset": 0, 00:14:11.379 "data_size": 65536 00:14:11.379 } 00:14:11.379 ] 00:14:11.379 } 00:14:11.379 } 00:14:11.379 }' 00:14:11.379 01:15:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:11.379 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:11.379 BaseBdev2 00:14:11.379 BaseBdev3 00:14:11.379 BaseBdev4' 00:14:11.379 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.379 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:11.379 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.379 01:15:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:11.379 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.379 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.379 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.379 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:11.639 [2024-10-15 01:15:24.286971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.639 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.640 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.640 01:15:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.640 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.640 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.640 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.640 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.640 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.640 "name": "Existed_Raid", 00:14:11.640 "uuid": "b91b2e78-fb01-4d67-97b9-7bb045638d91", 00:14:11.640 "strip_size_kb": 64, 00:14:11.640 "state": "online", 00:14:11.640 "raid_level": "raid5f", 00:14:11.640 "superblock": false, 00:14:11.640 "num_base_bdevs": 4, 00:14:11.640 "num_base_bdevs_discovered": 3, 00:14:11.640 "num_base_bdevs_operational": 3, 00:14:11.640 "base_bdevs_list": [ 00:14:11.640 { 00:14:11.640 "name": null, 00:14:11.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.640 "is_configured": false, 00:14:11.640 "data_offset": 0, 00:14:11.640 "data_size": 65536 00:14:11.640 }, 00:14:11.640 { 00:14:11.640 "name": "BaseBdev2", 00:14:11.640 "uuid": "4e573669-42a2-4f71-b7a0-1959b4aad381", 00:14:11.640 "is_configured": true, 00:14:11.640 "data_offset": 0, 00:14:11.640 "data_size": 65536 00:14:11.640 }, 00:14:11.640 { 00:14:11.640 "name": "BaseBdev3", 00:14:11.640 "uuid": "15f7c99e-68a6-48f8-9670-4bbf70b1c452", 00:14:11.640 "is_configured": true, 00:14:11.640 "data_offset": 0, 00:14:11.640 "data_size": 65536 00:14:11.640 }, 00:14:11.640 { 00:14:11.640 "name": "BaseBdev4", 00:14:11.640 "uuid": "a47935d6-c98b-4010-87af-d3f0a6d66d13", 00:14:11.640 "is_configured": true, 00:14:11.640 "data_offset": 0, 00:14:11.640 "data_size": 65536 00:14:11.640 } 00:14:11.640 ] 00:14:11.640 }' 00:14:11.640 
01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.640 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.210 [2024-10-15 01:15:24.801359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:12.210 [2024-10-15 01:15:24.801491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.210 [2024-10-15 01:15:24.812766] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.210 [2024-10-15 01:15:24.848743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.210 [2024-10-15 01:15:24.899904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:12.210 [2024-10-15 01:15:24.899994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.210 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:12.211 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:12.211 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.211 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:12.211 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.211 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.211 01:15:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.471 BaseBdev2 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.471 01:15:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.471 [ 00:14:12.471 { 00:14:12.471 "name": "BaseBdev2", 00:14:12.471 "aliases": [ 00:14:12.471 "43b58e14-e369-4abe-8602-e7e394a8eb43" 00:14:12.471 ], 00:14:12.471 "product_name": "Malloc disk", 00:14:12.471 "block_size": 512, 00:14:12.471 "num_blocks": 65536, 00:14:12.471 "uuid": "43b58e14-e369-4abe-8602-e7e394a8eb43", 00:14:12.471 "assigned_rate_limits": { 00:14:12.471 "rw_ios_per_sec": 0, 00:14:12.471 "rw_mbytes_per_sec": 0, 00:14:12.471 "r_mbytes_per_sec": 0, 00:14:12.471 "w_mbytes_per_sec": 0 00:14:12.471 }, 00:14:12.471 "claimed": false, 00:14:12.471 "zoned": false, 00:14:12.471 "supported_io_types": { 00:14:12.471 "read": true, 00:14:12.471 "write": true, 00:14:12.471 "unmap": true, 00:14:12.471 "flush": true, 00:14:12.471 "reset": true, 00:14:12.471 "nvme_admin": false, 00:14:12.471 "nvme_io": false, 00:14:12.471 "nvme_io_md": false, 00:14:12.471 "write_zeroes": true, 00:14:12.471 "zcopy": true, 00:14:12.471 "get_zone_info": false, 00:14:12.471 "zone_management": false, 00:14:12.471 "zone_append": false, 00:14:12.471 "compare": false, 00:14:12.471 "compare_and_write": false, 00:14:12.471 "abort": true, 00:14:12.471 "seek_hole": false, 00:14:12.471 "seek_data": false, 00:14:12.471 "copy": true, 00:14:12.471 "nvme_iov_md": false 00:14:12.471 }, 00:14:12.471 "memory_domains": [ 00:14:12.471 { 00:14:12.471 "dma_device_id": "system", 00:14:12.471 "dma_device_type": 1 00:14:12.471 }, 
00:14:12.471 { 00:14:12.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.471 "dma_device_type": 2 00:14:12.471 } 00:14:12.471 ], 00:14:12.471 "driver_specific": {} 00:14:12.471 } 00:14:12.471 ] 00:14:12.471 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.471 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:12.471 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.472 BaseBdev3 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.472 [ 00:14:12.472 { 00:14:12.472 "name": "BaseBdev3", 00:14:12.472 "aliases": [ 00:14:12.472 "231e2d56-7ad1-44dc-ae8a-3c08a5d2f487" 00:14:12.472 ], 00:14:12.472 "product_name": "Malloc disk", 00:14:12.472 "block_size": 512, 00:14:12.472 "num_blocks": 65536, 00:14:12.472 "uuid": "231e2d56-7ad1-44dc-ae8a-3c08a5d2f487", 00:14:12.472 "assigned_rate_limits": { 00:14:12.472 "rw_ios_per_sec": 0, 00:14:12.472 "rw_mbytes_per_sec": 0, 00:14:12.472 "r_mbytes_per_sec": 0, 00:14:12.472 "w_mbytes_per_sec": 0 00:14:12.472 }, 00:14:12.472 "claimed": false, 00:14:12.472 "zoned": false, 00:14:12.472 "supported_io_types": { 00:14:12.472 "read": true, 00:14:12.472 "write": true, 00:14:12.472 "unmap": true, 00:14:12.472 "flush": true, 00:14:12.472 "reset": true, 00:14:12.472 "nvme_admin": false, 00:14:12.472 "nvme_io": false, 00:14:12.472 "nvme_io_md": false, 00:14:12.472 "write_zeroes": true, 00:14:12.472 "zcopy": true, 00:14:12.472 "get_zone_info": false, 00:14:12.472 "zone_management": false, 00:14:12.472 "zone_append": false, 00:14:12.472 "compare": false, 00:14:12.472 "compare_and_write": false, 00:14:12.472 "abort": true, 00:14:12.472 "seek_hole": false, 00:14:12.472 "seek_data": false, 00:14:12.472 "copy": true, 00:14:12.472 "nvme_iov_md": false 00:14:12.472 }, 00:14:12.472 "memory_domains": [ 00:14:12.472 { 00:14:12.472 "dma_device_id": "system", 00:14:12.472 
"dma_device_type": 1 00:14:12.472 }, 00:14:12.472 { 00:14:12.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.472 "dma_device_type": 2 00:14:12.472 } 00:14:12.472 ], 00:14:12.472 "driver_specific": {} 00:14:12.472 } 00:14:12.472 ] 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.472 BaseBdev4 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:12.472 01:15:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.472 [ 00:14:12.472 { 00:14:12.472 "name": "BaseBdev4", 00:14:12.472 "aliases": [ 00:14:12.472 "3a27a9af-180f-4fcb-9eaa-2761f8b34350" 00:14:12.472 ], 00:14:12.472 "product_name": "Malloc disk", 00:14:12.472 "block_size": 512, 00:14:12.472 "num_blocks": 65536, 00:14:12.472 "uuid": "3a27a9af-180f-4fcb-9eaa-2761f8b34350", 00:14:12.472 "assigned_rate_limits": { 00:14:12.472 "rw_ios_per_sec": 0, 00:14:12.472 "rw_mbytes_per_sec": 0, 00:14:12.472 "r_mbytes_per_sec": 0, 00:14:12.472 "w_mbytes_per_sec": 0 00:14:12.472 }, 00:14:12.472 "claimed": false, 00:14:12.472 "zoned": false, 00:14:12.472 "supported_io_types": { 00:14:12.472 "read": true, 00:14:12.472 "write": true, 00:14:12.472 "unmap": true, 00:14:12.472 "flush": true, 00:14:12.472 "reset": true, 00:14:12.472 "nvme_admin": false, 00:14:12.472 "nvme_io": false, 00:14:12.472 "nvme_io_md": false, 00:14:12.472 "write_zeroes": true, 00:14:12.472 "zcopy": true, 00:14:12.472 "get_zone_info": false, 00:14:12.472 "zone_management": false, 00:14:12.472 "zone_append": false, 00:14:12.472 "compare": false, 00:14:12.472 "compare_and_write": false, 00:14:12.472 "abort": true, 00:14:12.472 "seek_hole": false, 00:14:12.472 "seek_data": false, 00:14:12.472 "copy": true, 00:14:12.472 "nvme_iov_md": false 00:14:12.472 }, 00:14:12.472 "memory_domains": [ 00:14:12.472 { 00:14:12.472 
"dma_device_id": "system", 00:14:12.472 "dma_device_type": 1 00:14:12.472 }, 00:14:12.472 { 00:14:12.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.472 "dma_device_type": 2 00:14:12.472 } 00:14:12.472 ], 00:14:12.472 "driver_specific": {} 00:14:12.472 } 00:14:12.472 ] 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.472 [2024-10-15 01:15:25.128214] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:12.472 [2024-10-15 01:15:25.128295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:12.472 [2024-10-15 01:15:25.128335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.472 [2024-10-15 01:15:25.130109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.472 [2024-10-15 01:15:25.130206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.472 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.472 "name": "Existed_Raid", 00:14:12.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.472 "strip_size_kb": 64, 00:14:12.472 "state": "configuring", 00:14:12.473 "raid_level": "raid5f", 00:14:12.473 "superblock": false, 00:14:12.473 
"num_base_bdevs": 4, 00:14:12.473 "num_base_bdevs_discovered": 3, 00:14:12.473 "num_base_bdevs_operational": 4, 00:14:12.473 "base_bdevs_list": [ 00:14:12.473 { 00:14:12.473 "name": "BaseBdev1", 00:14:12.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.473 "is_configured": false, 00:14:12.473 "data_offset": 0, 00:14:12.473 "data_size": 0 00:14:12.473 }, 00:14:12.473 { 00:14:12.473 "name": "BaseBdev2", 00:14:12.473 "uuid": "43b58e14-e369-4abe-8602-e7e394a8eb43", 00:14:12.473 "is_configured": true, 00:14:12.473 "data_offset": 0, 00:14:12.473 "data_size": 65536 00:14:12.473 }, 00:14:12.473 { 00:14:12.473 "name": "BaseBdev3", 00:14:12.473 "uuid": "231e2d56-7ad1-44dc-ae8a-3c08a5d2f487", 00:14:12.473 "is_configured": true, 00:14:12.473 "data_offset": 0, 00:14:12.473 "data_size": 65536 00:14:12.473 }, 00:14:12.473 { 00:14:12.473 "name": "BaseBdev4", 00:14:12.473 "uuid": "3a27a9af-180f-4fcb-9eaa-2761f8b34350", 00:14:12.473 "is_configured": true, 00:14:12.473 "data_offset": 0, 00:14:12.473 "data_size": 65536 00:14:12.473 } 00:14:12.473 ] 00:14:12.473 }' 00:14:12.473 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.473 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.043 [2024-10-15 01:15:25.567582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.043 "name": "Existed_Raid", 00:14:13.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.043 "strip_size_kb": 64, 00:14:13.043 "state": "configuring", 00:14:13.043 "raid_level": "raid5f", 00:14:13.043 "superblock": false, 00:14:13.043 "num_base_bdevs": 4, 
00:14:13.043 "num_base_bdevs_discovered": 2, 00:14:13.043 "num_base_bdevs_operational": 4, 00:14:13.043 "base_bdevs_list": [ 00:14:13.043 { 00:14:13.043 "name": "BaseBdev1", 00:14:13.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.043 "is_configured": false, 00:14:13.043 "data_offset": 0, 00:14:13.043 "data_size": 0 00:14:13.043 }, 00:14:13.043 { 00:14:13.043 "name": null, 00:14:13.043 "uuid": "43b58e14-e369-4abe-8602-e7e394a8eb43", 00:14:13.043 "is_configured": false, 00:14:13.043 "data_offset": 0, 00:14:13.043 "data_size": 65536 00:14:13.043 }, 00:14:13.043 { 00:14:13.043 "name": "BaseBdev3", 00:14:13.043 "uuid": "231e2d56-7ad1-44dc-ae8a-3c08a5d2f487", 00:14:13.043 "is_configured": true, 00:14:13.043 "data_offset": 0, 00:14:13.043 "data_size": 65536 00:14:13.043 }, 00:14:13.043 { 00:14:13.043 "name": "BaseBdev4", 00:14:13.043 "uuid": "3a27a9af-180f-4fcb-9eaa-2761f8b34350", 00:14:13.043 "is_configured": true, 00:14:13.043 "data_offset": 0, 00:14:13.043 "data_size": 65536 00:14:13.043 } 00:14:13.043 ] 00:14:13.043 }' 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.043 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.303 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.303 01:15:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:13.303 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.303 01:15:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.303 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:13.564 01:15:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.564 [2024-10-15 01:15:26.053888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.564 BaseBdev1 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.564 01:15:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.564 [ 00:14:13.564 { 00:14:13.564 "name": "BaseBdev1", 00:14:13.564 "aliases": [ 00:14:13.564 "6ba64725-821e-4a23-893b-ba04fd4f6c78" 00:14:13.564 ], 00:14:13.564 "product_name": "Malloc disk", 00:14:13.564 "block_size": 512, 00:14:13.564 "num_blocks": 65536, 00:14:13.564 "uuid": "6ba64725-821e-4a23-893b-ba04fd4f6c78", 00:14:13.564 "assigned_rate_limits": { 00:14:13.564 "rw_ios_per_sec": 0, 00:14:13.564 "rw_mbytes_per_sec": 0, 00:14:13.564 "r_mbytes_per_sec": 0, 00:14:13.564 "w_mbytes_per_sec": 0 00:14:13.564 }, 00:14:13.564 "claimed": true, 00:14:13.564 "claim_type": "exclusive_write", 00:14:13.564 "zoned": false, 00:14:13.564 "supported_io_types": { 00:14:13.564 "read": true, 00:14:13.564 "write": true, 00:14:13.564 "unmap": true, 00:14:13.564 "flush": true, 00:14:13.564 "reset": true, 00:14:13.564 "nvme_admin": false, 00:14:13.564 "nvme_io": false, 00:14:13.564 "nvme_io_md": false, 00:14:13.564 "write_zeroes": true, 00:14:13.564 "zcopy": true, 00:14:13.564 "get_zone_info": false, 00:14:13.564 "zone_management": false, 00:14:13.564 "zone_append": false, 00:14:13.564 "compare": false, 00:14:13.564 "compare_and_write": false, 00:14:13.564 "abort": true, 00:14:13.564 "seek_hole": false, 00:14:13.564 "seek_data": false, 00:14:13.564 "copy": true, 00:14:13.564 "nvme_iov_md": false 00:14:13.564 }, 00:14:13.564 "memory_domains": [ 00:14:13.564 { 00:14:13.564 "dma_device_id": "system", 00:14:13.564 "dma_device_type": 1 00:14:13.564 }, 00:14:13.564 { 00:14:13.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.564 "dma_device_type": 2 00:14:13.564 } 00:14:13.564 ], 00:14:13.564 "driver_specific": {} 00:14:13.564 } 00:14:13.564 ] 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:13.564 01:15:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.564 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.564 "name": "Existed_Raid", 00:14:13.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.564 "strip_size_kb": 64, 00:14:13.564 "state": 
"configuring", 00:14:13.564 "raid_level": "raid5f", 00:14:13.564 "superblock": false, 00:14:13.564 "num_base_bdevs": 4, 00:14:13.564 "num_base_bdevs_discovered": 3, 00:14:13.564 "num_base_bdevs_operational": 4, 00:14:13.564 "base_bdevs_list": [ 00:14:13.564 { 00:14:13.564 "name": "BaseBdev1", 00:14:13.564 "uuid": "6ba64725-821e-4a23-893b-ba04fd4f6c78", 00:14:13.564 "is_configured": true, 00:14:13.564 "data_offset": 0, 00:14:13.564 "data_size": 65536 00:14:13.564 }, 00:14:13.564 { 00:14:13.564 "name": null, 00:14:13.564 "uuid": "43b58e14-e369-4abe-8602-e7e394a8eb43", 00:14:13.564 "is_configured": false, 00:14:13.564 "data_offset": 0, 00:14:13.564 "data_size": 65536 00:14:13.564 }, 00:14:13.564 { 00:14:13.565 "name": "BaseBdev3", 00:14:13.565 "uuid": "231e2d56-7ad1-44dc-ae8a-3c08a5d2f487", 00:14:13.565 "is_configured": true, 00:14:13.565 "data_offset": 0, 00:14:13.565 "data_size": 65536 00:14:13.565 }, 00:14:13.565 { 00:14:13.565 "name": "BaseBdev4", 00:14:13.565 "uuid": "3a27a9af-180f-4fcb-9eaa-2761f8b34350", 00:14:13.565 "is_configured": true, 00:14:13.565 "data_offset": 0, 00:14:13.565 "data_size": 65536 00:14:13.565 } 00:14:13.565 ] 00:14:13.565 }' 00:14:13.565 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.565 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.825 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.825 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:13.825 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.825 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.085 01:15:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.085 [2024-10-15 01:15:26.585038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.085 01:15:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.085 "name": "Existed_Raid", 00:14:14.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.085 "strip_size_kb": 64, 00:14:14.085 "state": "configuring", 00:14:14.085 "raid_level": "raid5f", 00:14:14.085 "superblock": false, 00:14:14.085 "num_base_bdevs": 4, 00:14:14.085 "num_base_bdevs_discovered": 2, 00:14:14.085 "num_base_bdevs_operational": 4, 00:14:14.085 "base_bdevs_list": [ 00:14:14.085 { 00:14:14.085 "name": "BaseBdev1", 00:14:14.085 "uuid": "6ba64725-821e-4a23-893b-ba04fd4f6c78", 00:14:14.085 "is_configured": true, 00:14:14.085 "data_offset": 0, 00:14:14.085 "data_size": 65536 00:14:14.085 }, 00:14:14.085 { 00:14:14.085 "name": null, 00:14:14.085 "uuid": "43b58e14-e369-4abe-8602-e7e394a8eb43", 00:14:14.085 "is_configured": false, 00:14:14.085 "data_offset": 0, 00:14:14.085 "data_size": 65536 00:14:14.085 }, 00:14:14.085 { 00:14:14.085 "name": null, 00:14:14.085 "uuid": "231e2d56-7ad1-44dc-ae8a-3c08a5d2f487", 00:14:14.085 "is_configured": false, 00:14:14.085 "data_offset": 0, 00:14:14.085 "data_size": 65536 00:14:14.085 }, 00:14:14.085 { 00:14:14.085 "name": "BaseBdev4", 00:14:14.085 "uuid": "3a27a9af-180f-4fcb-9eaa-2761f8b34350", 00:14:14.085 "is_configured": true, 00:14:14.085 "data_offset": 0, 00:14:14.085 "data_size": 65536 00:14:14.085 } 00:14:14.085 ] 00:14:14.085 }' 00:14:14.085 01:15:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.085 01:15:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.345 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.345 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.345 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.345 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:14.345 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.345 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:14.345 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:14.345 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.345 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.345 [2024-10-15 01:15:27.064280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.345 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.345 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.605 
01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.605 "name": "Existed_Raid", 00:14:14.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.605 "strip_size_kb": 64, 00:14:14.605 "state": "configuring", 00:14:14.605 "raid_level": "raid5f", 00:14:14.605 "superblock": false, 00:14:14.605 "num_base_bdevs": 4, 00:14:14.605 "num_base_bdevs_discovered": 3, 00:14:14.605 "num_base_bdevs_operational": 4, 00:14:14.605 "base_bdevs_list": [ 00:14:14.605 { 00:14:14.605 "name": "BaseBdev1", 00:14:14.605 "uuid": "6ba64725-821e-4a23-893b-ba04fd4f6c78", 00:14:14.605 "is_configured": true, 00:14:14.605 "data_offset": 0, 00:14:14.605 "data_size": 65536 00:14:14.605 }, 00:14:14.605 { 00:14:14.605 "name": null, 00:14:14.605 "uuid": "43b58e14-e369-4abe-8602-e7e394a8eb43", 00:14:14.605 "is_configured": 
false, 00:14:14.605 "data_offset": 0, 00:14:14.605 "data_size": 65536 00:14:14.605 }, 00:14:14.605 { 00:14:14.605 "name": "BaseBdev3", 00:14:14.605 "uuid": "231e2d56-7ad1-44dc-ae8a-3c08a5d2f487", 00:14:14.605 "is_configured": true, 00:14:14.605 "data_offset": 0, 00:14:14.605 "data_size": 65536 00:14:14.605 }, 00:14:14.605 { 00:14:14.605 "name": "BaseBdev4", 00:14:14.605 "uuid": "3a27a9af-180f-4fcb-9eaa-2761f8b34350", 00:14:14.605 "is_configured": true, 00:14:14.605 "data_offset": 0, 00:14:14.605 "data_size": 65536 00:14:14.605 } 00:14:14.605 ] 00:14:14.605 }' 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.605 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.866 [2024-10-15 01:15:27.531475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.866 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.126 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.126 "name": "Existed_Raid", 00:14:15.126 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:15.126 "strip_size_kb": 64, 00:14:15.126 "state": "configuring", 00:14:15.126 "raid_level": "raid5f", 00:14:15.126 "superblock": false, 00:14:15.126 "num_base_bdevs": 4, 00:14:15.126 "num_base_bdevs_discovered": 2, 00:14:15.126 "num_base_bdevs_operational": 4, 00:14:15.126 "base_bdevs_list": [ 00:14:15.126 { 00:14:15.126 "name": null, 00:14:15.126 "uuid": "6ba64725-821e-4a23-893b-ba04fd4f6c78", 00:14:15.126 "is_configured": false, 00:14:15.126 "data_offset": 0, 00:14:15.126 "data_size": 65536 00:14:15.126 }, 00:14:15.126 { 00:14:15.126 "name": null, 00:14:15.126 "uuid": "43b58e14-e369-4abe-8602-e7e394a8eb43", 00:14:15.126 "is_configured": false, 00:14:15.126 "data_offset": 0, 00:14:15.126 "data_size": 65536 00:14:15.126 }, 00:14:15.126 { 00:14:15.126 "name": "BaseBdev3", 00:14:15.126 "uuid": "231e2d56-7ad1-44dc-ae8a-3c08a5d2f487", 00:14:15.126 "is_configured": true, 00:14:15.126 "data_offset": 0, 00:14:15.126 "data_size": 65536 00:14:15.126 }, 00:14:15.126 { 00:14:15.126 "name": "BaseBdev4", 00:14:15.126 "uuid": "3a27a9af-180f-4fcb-9eaa-2761f8b34350", 00:14:15.126 "is_configured": true, 00:14:15.126 "data_offset": 0, 00:14:15.126 "data_size": 65536 00:14:15.126 } 00:14:15.126 ] 00:14:15.126 }' 00:14:15.126 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.126 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.388 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.388 01:15:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:15.388 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.388 01:15:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.388 01:15:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.388 [2024-10-15 01:15:28.017155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.388 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.388 "name": "Existed_Raid", 00:14:15.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.388 "strip_size_kb": 64, 00:14:15.388 "state": "configuring", 00:14:15.388 "raid_level": "raid5f", 00:14:15.388 "superblock": false, 00:14:15.388 "num_base_bdevs": 4, 00:14:15.388 "num_base_bdevs_discovered": 3, 00:14:15.388 "num_base_bdevs_operational": 4, 00:14:15.388 "base_bdevs_list": [ 00:14:15.388 { 00:14:15.388 "name": null, 00:14:15.388 "uuid": "6ba64725-821e-4a23-893b-ba04fd4f6c78", 00:14:15.388 "is_configured": false, 00:14:15.388 "data_offset": 0, 00:14:15.388 "data_size": 65536 00:14:15.388 }, 00:14:15.388 { 00:14:15.388 "name": "BaseBdev2", 00:14:15.389 "uuid": "43b58e14-e369-4abe-8602-e7e394a8eb43", 00:14:15.389 "is_configured": true, 00:14:15.389 "data_offset": 0, 00:14:15.389 "data_size": 65536 00:14:15.389 }, 00:14:15.389 { 00:14:15.389 "name": "BaseBdev3", 00:14:15.389 "uuid": "231e2d56-7ad1-44dc-ae8a-3c08a5d2f487", 00:14:15.389 "is_configured": true, 00:14:15.389 "data_offset": 0, 00:14:15.389 "data_size": 65536 00:14:15.389 }, 00:14:15.389 { 00:14:15.389 "name": "BaseBdev4", 00:14:15.389 "uuid": "3a27a9af-180f-4fcb-9eaa-2761f8b34350", 00:14:15.389 "is_configured": true, 00:14:15.389 "data_offset": 0, 00:14:15.389 "data_size": 65536 00:14:15.389 } 00:14:15.389 ] 00:14:15.389 }' 00:14:15.389 01:15:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.389 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.960 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.960 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.960 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.960 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:15.960 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.960 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:15.960 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.960 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6ba64725-821e-4a23-893b-ba04fd4f6c78 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.961 [2024-10-15 01:15:28.543359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:15.961 [2024-10-15 
01:15:28.543473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:15.961 [2024-10-15 01:15:28.543497] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:15.961 [2024-10-15 01:15:28.543820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:14:15.961 [2024-10-15 01:15:28.544346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:15.961 [2024-10-15 01:15:28.544401] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:14:15.961 [2024-10-15 01:15:28.544615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.961 NewBaseBdev 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.961 [ 00:14:15.961 { 00:14:15.961 "name": "NewBaseBdev", 00:14:15.961 "aliases": [ 00:14:15.961 "6ba64725-821e-4a23-893b-ba04fd4f6c78" 00:14:15.961 ], 00:14:15.961 "product_name": "Malloc disk", 00:14:15.961 "block_size": 512, 00:14:15.961 "num_blocks": 65536, 00:14:15.961 "uuid": "6ba64725-821e-4a23-893b-ba04fd4f6c78", 00:14:15.961 "assigned_rate_limits": { 00:14:15.961 "rw_ios_per_sec": 0, 00:14:15.961 "rw_mbytes_per_sec": 0, 00:14:15.961 "r_mbytes_per_sec": 0, 00:14:15.961 "w_mbytes_per_sec": 0 00:14:15.961 }, 00:14:15.961 "claimed": true, 00:14:15.961 "claim_type": "exclusive_write", 00:14:15.961 "zoned": false, 00:14:15.961 "supported_io_types": { 00:14:15.961 "read": true, 00:14:15.961 "write": true, 00:14:15.961 "unmap": true, 00:14:15.961 "flush": true, 00:14:15.961 "reset": true, 00:14:15.961 "nvme_admin": false, 00:14:15.961 "nvme_io": false, 00:14:15.961 "nvme_io_md": false, 00:14:15.961 "write_zeroes": true, 00:14:15.961 "zcopy": true, 00:14:15.961 "get_zone_info": false, 00:14:15.961 "zone_management": false, 00:14:15.961 "zone_append": false, 00:14:15.961 "compare": false, 00:14:15.961 "compare_and_write": false, 00:14:15.961 "abort": true, 00:14:15.961 "seek_hole": false, 00:14:15.961 "seek_data": false, 00:14:15.961 "copy": true, 00:14:15.961 "nvme_iov_md": false 00:14:15.961 }, 00:14:15.961 "memory_domains": [ 00:14:15.961 { 00:14:15.961 "dma_device_id": "system", 00:14:15.961 "dma_device_type": 1 00:14:15.961 }, 00:14:15.961 { 00:14:15.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.961 "dma_device_type": 2 00:14:15.961 } 
00:14:15.961 ], 00:14:15.961 "driver_specific": {} 00:14:15.961 } 00:14:15.961 ] 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.961 "name": "Existed_Raid", 00:14:15.961 "uuid": "c23563d9-d589-41fc-992f-90e3063a0d5c", 00:14:15.961 "strip_size_kb": 64, 00:14:15.961 "state": "online", 00:14:15.961 "raid_level": "raid5f", 00:14:15.961 "superblock": false, 00:14:15.961 "num_base_bdevs": 4, 00:14:15.961 "num_base_bdevs_discovered": 4, 00:14:15.961 "num_base_bdevs_operational": 4, 00:14:15.961 "base_bdevs_list": [ 00:14:15.961 { 00:14:15.961 "name": "NewBaseBdev", 00:14:15.961 "uuid": "6ba64725-821e-4a23-893b-ba04fd4f6c78", 00:14:15.961 "is_configured": true, 00:14:15.961 "data_offset": 0, 00:14:15.961 "data_size": 65536 00:14:15.961 }, 00:14:15.961 { 00:14:15.961 "name": "BaseBdev2", 00:14:15.961 "uuid": "43b58e14-e369-4abe-8602-e7e394a8eb43", 00:14:15.961 "is_configured": true, 00:14:15.961 "data_offset": 0, 00:14:15.961 "data_size": 65536 00:14:15.961 }, 00:14:15.961 { 00:14:15.961 "name": "BaseBdev3", 00:14:15.961 "uuid": "231e2d56-7ad1-44dc-ae8a-3c08a5d2f487", 00:14:15.961 "is_configured": true, 00:14:15.961 "data_offset": 0, 00:14:15.961 "data_size": 65536 00:14:15.961 }, 00:14:15.961 { 00:14:15.961 "name": "BaseBdev4", 00:14:15.961 "uuid": "3a27a9af-180f-4fcb-9eaa-2761f8b34350", 00:14:15.961 "is_configured": true, 00:14:15.961 "data_offset": 0, 00:14:15.961 "data_size": 65536 00:14:15.961 } 00:14:15.961 ] 00:14:15.961 }' 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.961 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.533 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:16.533 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:16.533 01:15:28 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:16.533 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:16.533 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:16.533 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:16.533 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:16.533 01:15:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:16.533 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.533 01:15:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.533 [2024-10-15 01:15:28.982892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:16.533 "name": "Existed_Raid", 00:14:16.533 "aliases": [ 00:14:16.533 "c23563d9-d589-41fc-992f-90e3063a0d5c" 00:14:16.533 ], 00:14:16.533 "product_name": "Raid Volume", 00:14:16.533 "block_size": 512, 00:14:16.533 "num_blocks": 196608, 00:14:16.533 "uuid": "c23563d9-d589-41fc-992f-90e3063a0d5c", 00:14:16.533 "assigned_rate_limits": { 00:14:16.533 "rw_ios_per_sec": 0, 00:14:16.533 "rw_mbytes_per_sec": 0, 00:14:16.533 "r_mbytes_per_sec": 0, 00:14:16.533 "w_mbytes_per_sec": 0 00:14:16.533 }, 00:14:16.533 "claimed": false, 00:14:16.533 "zoned": false, 00:14:16.533 "supported_io_types": { 00:14:16.533 "read": true, 00:14:16.533 "write": true, 00:14:16.533 "unmap": false, 00:14:16.533 "flush": false, 00:14:16.533 "reset": true, 00:14:16.533 "nvme_admin": false, 00:14:16.533 "nvme_io": false, 00:14:16.533 "nvme_io_md": 
false, 00:14:16.533 "write_zeroes": true, 00:14:16.533 "zcopy": false, 00:14:16.533 "get_zone_info": false, 00:14:16.533 "zone_management": false, 00:14:16.533 "zone_append": false, 00:14:16.533 "compare": false, 00:14:16.533 "compare_and_write": false, 00:14:16.533 "abort": false, 00:14:16.533 "seek_hole": false, 00:14:16.533 "seek_data": false, 00:14:16.533 "copy": false, 00:14:16.533 "nvme_iov_md": false 00:14:16.533 }, 00:14:16.533 "driver_specific": { 00:14:16.533 "raid": { 00:14:16.533 "uuid": "c23563d9-d589-41fc-992f-90e3063a0d5c", 00:14:16.533 "strip_size_kb": 64, 00:14:16.533 "state": "online", 00:14:16.533 "raid_level": "raid5f", 00:14:16.533 "superblock": false, 00:14:16.533 "num_base_bdevs": 4, 00:14:16.533 "num_base_bdevs_discovered": 4, 00:14:16.533 "num_base_bdevs_operational": 4, 00:14:16.533 "base_bdevs_list": [ 00:14:16.533 { 00:14:16.533 "name": "NewBaseBdev", 00:14:16.533 "uuid": "6ba64725-821e-4a23-893b-ba04fd4f6c78", 00:14:16.533 "is_configured": true, 00:14:16.533 "data_offset": 0, 00:14:16.533 "data_size": 65536 00:14:16.533 }, 00:14:16.533 { 00:14:16.533 "name": "BaseBdev2", 00:14:16.533 "uuid": "43b58e14-e369-4abe-8602-e7e394a8eb43", 00:14:16.533 "is_configured": true, 00:14:16.533 "data_offset": 0, 00:14:16.533 "data_size": 65536 00:14:16.533 }, 00:14:16.533 { 00:14:16.533 "name": "BaseBdev3", 00:14:16.533 "uuid": "231e2d56-7ad1-44dc-ae8a-3c08a5d2f487", 00:14:16.533 "is_configured": true, 00:14:16.533 "data_offset": 0, 00:14:16.533 "data_size": 65536 00:14:16.533 }, 00:14:16.533 { 00:14:16.533 "name": "BaseBdev4", 00:14:16.533 "uuid": "3a27a9af-180f-4fcb-9eaa-2761f8b34350", 00:14:16.533 "is_configured": true, 00:14:16.533 "data_offset": 0, 00:14:16.533 "data_size": 65536 00:14:16.533 } 00:14:16.533 ] 00:14:16.533 } 00:14:16.533 } 00:14:16.533 }' 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.533 01:15:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:16.533 BaseBdev2 00:14:16.533 BaseBdev3 00:14:16.533 BaseBdev4' 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.533 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.533 01:15:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.794 [2024-10-15 01:15:29.274191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:16.794 [2024-10-15 01:15:29.274221] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.794 [2024-10-15 01:15:29.274304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.794 [2024-10-15 01:15:29.274556] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.794 [2024-10-15 01:15:29.274566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 92983 00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 92983 ']' 00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 92983 00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92983 
00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92983' killing process with pid 92983 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 92983 
00:14:16.794 [2024-10-15 01:15:29.327678] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:14:16.794 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 92983 
00:14:16.794 [2024-10-15 01:15:29.368400] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 
00:14:17.054 01:15:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 
00:14:17.054 
00:14:17.054 real 0m9.527s 
00:14:17.054 user 0m16.312s 
00:14:17.054 sys 0m2.029s 
00:14:17.054 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:14:17.054 ************************************ 
00:14:17.054 END TEST raid5f_state_function_test 
00:14:17.054 ************************************ 
00:14:17.054 01:15:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:17.054 01:15:29 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 
00:14:17.054 01:15:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 
00:14:17.054 01:15:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:14:17.054 01:15:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:14:17.054 ************************************ 
00:14:17.054 START TEST 
raid5f_state_function_test_sb 00:14:17.054 ************************************ 00:14:17.054 01:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:14:17.054 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:17.054 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:17.054 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:17.054 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:17.055 
01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=93633 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93633' 00:14:17.055 Process raid pid: 93633 00:14:17.055 01:15:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 93633 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 93633 ']' 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.055 01:15:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.055 [2024-10-15 01:15:29.742098] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:14:17.055 [2024-10-15 01:15:29.742334] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.315 [2024-10-15 01:15:29.884887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.315 [2024-10-15 01:15:29.914027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.315 [2024-10-15 01:15:29.956607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.315 [2024-10-15 01:15:29.956735] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.884 01:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.884 01:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:17.884 01:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:17.884 01:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.884 01:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.884 [2024-10-15 01:15:30.594401] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:17.884 [2024-10-15 01:15:30.594450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:17.884 [2024-10-15 01:15:30.594461] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:17.884 [2024-10-15 01:15:30.594470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:17.884 [2024-10-15 01:15:30.594476] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:14:17.884 [2024-10-15 01:15:30.594488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:17.884 [2024-10-15 01:15:30.594494] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:17.884 [2024-10-15 01:15:30.594503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:17.884 01:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.885 01:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:17.885 01:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.885 01:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.885 01:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.885 01:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.885 01:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.885 01:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.885 01:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.885 01:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.885 01:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.885 01:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.885 01:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:14:17.885 01:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.885 01:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.144 01:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.144 01:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.144 "name": "Existed_Raid", 00:14:18.144 "uuid": "00e61f9c-88d9-4302-8a70-f98c24c04349", 00:14:18.144 "strip_size_kb": 64, 00:14:18.144 "state": "configuring", 00:14:18.145 "raid_level": "raid5f", 00:14:18.145 "superblock": true, 00:14:18.145 "num_base_bdevs": 4, 00:14:18.145 "num_base_bdevs_discovered": 0, 00:14:18.145 "num_base_bdevs_operational": 4, 00:14:18.145 "base_bdevs_list": [ 00:14:18.145 { 00:14:18.145 "name": "BaseBdev1", 00:14:18.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.145 "is_configured": false, 00:14:18.145 "data_offset": 0, 00:14:18.145 "data_size": 0 00:14:18.145 }, 00:14:18.145 { 00:14:18.145 "name": "BaseBdev2", 00:14:18.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.145 "is_configured": false, 00:14:18.145 "data_offset": 0, 00:14:18.145 "data_size": 0 00:14:18.145 }, 00:14:18.145 { 00:14:18.145 "name": "BaseBdev3", 00:14:18.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.145 "is_configured": false, 00:14:18.145 "data_offset": 0, 00:14:18.145 "data_size": 0 00:14:18.145 }, 00:14:18.145 { 00:14:18.145 "name": "BaseBdev4", 00:14:18.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.145 "is_configured": false, 00:14:18.145 "data_offset": 0, 00:14:18.145 "data_size": 0 00:14:18.145 } 00:14:18.145 ] 00:14:18.145 }' 00:14:18.145 01:15:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.145 01:15:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.405 [2024-10-15 01:15:31.025579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:18.405 [2024-10-15 01:15:31.025667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.405 [2024-10-15 01:15:31.033601] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:18.405 [2024-10-15 01:15:31.033676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:18.405 [2024-10-15 01:15:31.033703] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:18.405 [2024-10-15 01:15:31.033725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:18.405 [2024-10-15 01:15:31.033743] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:18.405 [2024-10-15 01:15:31.033763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:18.405 [2024-10-15 01:15:31.033780] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:18.405 [2024-10-15 01:15:31.033801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.405 [2024-10-15 01:15:31.050802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:18.405 BaseBdev1 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.405 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.405 [ 00:14:18.405 { 00:14:18.405 "name": "BaseBdev1", 00:14:18.405 "aliases": [ 00:14:18.405 "f4882b1e-5ef1-404a-9170-2129401f4599" 00:14:18.405 ], 00:14:18.406 "product_name": "Malloc disk", 00:14:18.406 "block_size": 512, 00:14:18.406 "num_blocks": 65536, 00:14:18.406 "uuid": "f4882b1e-5ef1-404a-9170-2129401f4599", 00:14:18.406 "assigned_rate_limits": { 00:14:18.406 "rw_ios_per_sec": 0, 00:14:18.406 "rw_mbytes_per_sec": 0, 00:14:18.406 "r_mbytes_per_sec": 0, 00:14:18.406 "w_mbytes_per_sec": 0 00:14:18.406 }, 00:14:18.406 "claimed": true, 00:14:18.406 "claim_type": "exclusive_write", 00:14:18.406 "zoned": false, 00:14:18.406 "supported_io_types": { 00:14:18.406 "read": true, 00:14:18.406 "write": true, 00:14:18.406 "unmap": true, 00:14:18.406 "flush": true, 00:14:18.406 "reset": true, 00:14:18.406 "nvme_admin": false, 00:14:18.406 "nvme_io": false, 00:14:18.406 "nvme_io_md": false, 00:14:18.406 "write_zeroes": true, 00:14:18.406 "zcopy": true, 00:14:18.406 "get_zone_info": false, 00:14:18.406 "zone_management": false, 00:14:18.406 "zone_append": false, 00:14:18.406 "compare": false, 00:14:18.406 "compare_and_write": false, 00:14:18.406 "abort": true, 00:14:18.406 "seek_hole": false, 00:14:18.406 "seek_data": false, 00:14:18.406 "copy": true, 00:14:18.406 "nvme_iov_md": false 00:14:18.406 }, 00:14:18.406 "memory_domains": [ 00:14:18.406 { 00:14:18.406 "dma_device_id": "system", 00:14:18.406 "dma_device_type": 1 00:14:18.406 }, 00:14:18.406 { 00:14:18.406 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:18.406 "dma_device_type": 2 00:14:18.406 } 00:14:18.406 ], 00:14:18.406 "driver_specific": {} 00:14:18.406 } 00:14:18.406 ] 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.406 01:15:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.406 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.666 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.666 "name": "Existed_Raid", 00:14:18.666 "uuid": "36369b69-a6b6-42be-a3e9-af7085ff2f6c", 00:14:18.666 "strip_size_kb": 64, 00:14:18.666 "state": "configuring", 00:14:18.666 "raid_level": "raid5f", 00:14:18.666 "superblock": true, 00:14:18.666 "num_base_bdevs": 4, 00:14:18.666 "num_base_bdevs_discovered": 1, 00:14:18.666 "num_base_bdevs_operational": 4, 00:14:18.666 "base_bdevs_list": [ 00:14:18.666 { 00:14:18.666 "name": "BaseBdev1", 00:14:18.666 "uuid": "f4882b1e-5ef1-404a-9170-2129401f4599", 00:14:18.666 "is_configured": true, 00:14:18.666 "data_offset": 2048, 00:14:18.666 "data_size": 63488 00:14:18.666 }, 00:14:18.666 { 00:14:18.666 "name": "BaseBdev2", 00:14:18.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.666 "is_configured": false, 00:14:18.666 "data_offset": 0, 00:14:18.666 "data_size": 0 00:14:18.666 }, 00:14:18.666 { 00:14:18.666 "name": "BaseBdev3", 00:14:18.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.666 "is_configured": false, 00:14:18.666 "data_offset": 0, 00:14:18.666 "data_size": 0 00:14:18.666 }, 00:14:18.666 { 00:14:18.666 "name": "BaseBdev4", 00:14:18.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.666 "is_configured": false, 00:14:18.666 "data_offset": 0, 00:14:18.666 "data_size": 0 00:14:18.666 } 00:14:18.666 ] 00:14:18.666 }' 00:14:18.666 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.666 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:18.926 01:15:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.926 [2024-10-15 01:15:31.554015] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:18.926 [2024-10-15 01:15:31.554070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.926 [2024-10-15 01:15:31.566052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:18.926 [2024-10-15 01:15:31.567913] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:18.926 [2024-10-15 01:15:31.567955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:18.926 [2024-10-15 01:15:31.567973] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:18.926 [2024-10-15 01:15:31.567982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:18.926 [2024-10-15 01:15:31.567988] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:18.926 [2024-10-15 01:15:31.567996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.926 01:15:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.926 "name": "Existed_Raid", 00:14:18.926 "uuid": "0cf0552a-80f0-47a6-853b-82a965e49676", 00:14:18.926 "strip_size_kb": 64, 00:14:18.926 "state": "configuring", 00:14:18.926 "raid_level": "raid5f", 00:14:18.926 "superblock": true, 00:14:18.926 "num_base_bdevs": 4, 00:14:18.926 "num_base_bdevs_discovered": 1, 00:14:18.926 "num_base_bdevs_operational": 4, 00:14:18.926 "base_bdevs_list": [ 00:14:18.926 { 00:14:18.926 "name": "BaseBdev1", 00:14:18.926 "uuid": "f4882b1e-5ef1-404a-9170-2129401f4599", 00:14:18.926 "is_configured": true, 00:14:18.926 "data_offset": 2048, 00:14:18.926 "data_size": 63488 00:14:18.926 }, 00:14:18.926 { 00:14:18.926 "name": "BaseBdev2", 00:14:18.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.926 "is_configured": false, 00:14:18.926 "data_offset": 0, 00:14:18.926 "data_size": 0 00:14:18.926 }, 00:14:18.926 { 00:14:18.926 "name": "BaseBdev3", 00:14:18.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.926 "is_configured": false, 00:14:18.926 "data_offset": 0, 00:14:18.926 "data_size": 0 00:14:18.926 }, 00:14:18.926 { 00:14:18.926 "name": "BaseBdev4", 00:14:18.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.926 "is_configured": false, 00:14:18.926 "data_offset": 0, 00:14:18.926 "data_size": 0 00:14:18.926 } 00:14:18.926 ] 00:14:18.926 }' 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.926 01:15:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.496 [2024-10-15 01:15:32.016213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.496 BaseBdev2 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.496 [ 00:14:19.496 { 00:14:19.496 "name": "BaseBdev2", 00:14:19.496 "aliases": [ 00:14:19.496 
"222e1c61-4fe9-493e-97b7-00418b3da66b" 00:14:19.496 ], 00:14:19.496 "product_name": "Malloc disk", 00:14:19.496 "block_size": 512, 00:14:19.496 "num_blocks": 65536, 00:14:19.496 "uuid": "222e1c61-4fe9-493e-97b7-00418b3da66b", 00:14:19.496 "assigned_rate_limits": { 00:14:19.496 "rw_ios_per_sec": 0, 00:14:19.496 "rw_mbytes_per_sec": 0, 00:14:19.496 "r_mbytes_per_sec": 0, 00:14:19.496 "w_mbytes_per_sec": 0 00:14:19.496 }, 00:14:19.496 "claimed": true, 00:14:19.496 "claim_type": "exclusive_write", 00:14:19.496 "zoned": false, 00:14:19.496 "supported_io_types": { 00:14:19.496 "read": true, 00:14:19.496 "write": true, 00:14:19.496 "unmap": true, 00:14:19.496 "flush": true, 00:14:19.496 "reset": true, 00:14:19.496 "nvme_admin": false, 00:14:19.496 "nvme_io": false, 00:14:19.496 "nvme_io_md": false, 00:14:19.496 "write_zeroes": true, 00:14:19.496 "zcopy": true, 00:14:19.496 "get_zone_info": false, 00:14:19.496 "zone_management": false, 00:14:19.496 "zone_append": false, 00:14:19.496 "compare": false, 00:14:19.496 "compare_and_write": false, 00:14:19.496 "abort": true, 00:14:19.496 "seek_hole": false, 00:14:19.496 "seek_data": false, 00:14:19.496 "copy": true, 00:14:19.496 "nvme_iov_md": false 00:14:19.496 }, 00:14:19.496 "memory_domains": [ 00:14:19.496 { 00:14:19.496 "dma_device_id": "system", 00:14:19.496 "dma_device_type": 1 00:14:19.496 }, 00:14:19.496 { 00:14:19.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.496 "dma_device_type": 2 00:14:19.496 } 00:14:19.496 ], 00:14:19.496 "driver_specific": {} 00:14:19.496 } 00:14:19.496 ] 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.496 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.496 "name": "Existed_Raid", 00:14:19.496 "uuid": 
"0cf0552a-80f0-47a6-853b-82a965e49676", 00:14:19.496 "strip_size_kb": 64, 00:14:19.496 "state": "configuring", 00:14:19.496 "raid_level": "raid5f", 00:14:19.496 "superblock": true, 00:14:19.496 "num_base_bdevs": 4, 00:14:19.496 "num_base_bdevs_discovered": 2, 00:14:19.496 "num_base_bdevs_operational": 4, 00:14:19.496 "base_bdevs_list": [ 00:14:19.496 { 00:14:19.496 "name": "BaseBdev1", 00:14:19.496 "uuid": "f4882b1e-5ef1-404a-9170-2129401f4599", 00:14:19.496 "is_configured": true, 00:14:19.497 "data_offset": 2048, 00:14:19.497 "data_size": 63488 00:14:19.497 }, 00:14:19.497 { 00:14:19.497 "name": "BaseBdev2", 00:14:19.497 "uuid": "222e1c61-4fe9-493e-97b7-00418b3da66b", 00:14:19.497 "is_configured": true, 00:14:19.497 "data_offset": 2048, 00:14:19.497 "data_size": 63488 00:14:19.497 }, 00:14:19.497 { 00:14:19.497 "name": "BaseBdev3", 00:14:19.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.497 "is_configured": false, 00:14:19.497 "data_offset": 0, 00:14:19.497 "data_size": 0 00:14:19.497 }, 00:14:19.497 { 00:14:19.497 "name": "BaseBdev4", 00:14:19.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.497 "is_configured": false, 00:14:19.497 "data_offset": 0, 00:14:19.497 "data_size": 0 00:14:19.497 } 00:14:19.497 ] 00:14:19.497 }' 00:14:19.497 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.497 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.760 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:19.760 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.760 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.035 [2024-10-15 01:15:32.491766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:20.035 BaseBdev3 
00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.035 [ 00:14:20.035 { 00:14:20.035 "name": "BaseBdev3", 00:14:20.035 "aliases": [ 00:14:20.035 "46ef18d1-2cb5-41de-8457-28801b691e2f" 00:14:20.035 ], 00:14:20.035 "product_name": "Malloc disk", 00:14:20.035 "block_size": 512, 00:14:20.035 "num_blocks": 65536, 00:14:20.035 "uuid": "46ef18d1-2cb5-41de-8457-28801b691e2f", 00:14:20.035 
"assigned_rate_limits": { 00:14:20.035 "rw_ios_per_sec": 0, 00:14:20.035 "rw_mbytes_per_sec": 0, 00:14:20.035 "r_mbytes_per_sec": 0, 00:14:20.035 "w_mbytes_per_sec": 0 00:14:20.035 }, 00:14:20.035 "claimed": true, 00:14:20.035 "claim_type": "exclusive_write", 00:14:20.035 "zoned": false, 00:14:20.035 "supported_io_types": { 00:14:20.035 "read": true, 00:14:20.035 "write": true, 00:14:20.035 "unmap": true, 00:14:20.035 "flush": true, 00:14:20.035 "reset": true, 00:14:20.035 "nvme_admin": false, 00:14:20.035 "nvme_io": false, 00:14:20.035 "nvme_io_md": false, 00:14:20.035 "write_zeroes": true, 00:14:20.035 "zcopy": true, 00:14:20.035 "get_zone_info": false, 00:14:20.035 "zone_management": false, 00:14:20.035 "zone_append": false, 00:14:20.035 "compare": false, 00:14:20.035 "compare_and_write": false, 00:14:20.035 "abort": true, 00:14:20.035 "seek_hole": false, 00:14:20.035 "seek_data": false, 00:14:20.035 "copy": true, 00:14:20.035 "nvme_iov_md": false 00:14:20.035 }, 00:14:20.035 "memory_domains": [ 00:14:20.035 { 00:14:20.035 "dma_device_id": "system", 00:14:20.035 "dma_device_type": 1 00:14:20.035 }, 00:14:20.035 { 00:14:20.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.035 "dma_device_type": 2 00:14:20.035 } 00:14:20.035 ], 00:14:20.035 "driver_specific": {} 00:14:20.035 } 00:14:20.035 ] 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.035 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.035 "name": "Existed_Raid", 00:14:20.035 "uuid": "0cf0552a-80f0-47a6-853b-82a965e49676", 00:14:20.035 "strip_size_kb": 64, 00:14:20.035 "state": "configuring", 00:14:20.035 "raid_level": "raid5f", 00:14:20.035 "superblock": true, 00:14:20.035 "num_base_bdevs": 4, 00:14:20.035 "num_base_bdevs_discovered": 3, 
00:14:20.036 "num_base_bdevs_operational": 4, 00:14:20.036 "base_bdevs_list": [ 00:14:20.036 { 00:14:20.036 "name": "BaseBdev1", 00:14:20.036 "uuid": "f4882b1e-5ef1-404a-9170-2129401f4599", 00:14:20.036 "is_configured": true, 00:14:20.036 "data_offset": 2048, 00:14:20.036 "data_size": 63488 00:14:20.036 }, 00:14:20.036 { 00:14:20.036 "name": "BaseBdev2", 00:14:20.036 "uuid": "222e1c61-4fe9-493e-97b7-00418b3da66b", 00:14:20.036 "is_configured": true, 00:14:20.036 "data_offset": 2048, 00:14:20.036 "data_size": 63488 00:14:20.036 }, 00:14:20.036 { 00:14:20.036 "name": "BaseBdev3", 00:14:20.036 "uuid": "46ef18d1-2cb5-41de-8457-28801b691e2f", 00:14:20.036 "is_configured": true, 00:14:20.036 "data_offset": 2048, 00:14:20.036 "data_size": 63488 00:14:20.036 }, 00:14:20.036 { 00:14:20.036 "name": "BaseBdev4", 00:14:20.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.036 "is_configured": false, 00:14:20.036 "data_offset": 0, 00:14:20.036 "data_size": 0 00:14:20.036 } 00:14:20.036 ] 00:14:20.036 }' 00:14:20.036 01:15:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.036 01:15:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.313 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:20.313 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.313 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.313 [2024-10-15 01:15:33.022000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:20.314 [2024-10-15 01:15:33.022321] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:20.314 [2024-10-15 01:15:33.022380] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:20.314 BaseBdev4 
00:14:20.314 [2024-10-15 01:15:33.022673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:20.314 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.314 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:20.314 [2024-10-15 01:15:33.023168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:20.314 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:20.314 [2024-10-15 01:15:33.023232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:20.314 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:20.314 [2024-10-15 01:15:33.023362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.314 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:20.314 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:20.314 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:20.314 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:20.314 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.314 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.314 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.314 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:20.314 01:15:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.314 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.574 [ 00:14:20.574 { 00:14:20.574 "name": "BaseBdev4", 00:14:20.574 "aliases": [ 00:14:20.574 "fb319696-bd44-4af6-aa52-be4371106028" 00:14:20.574 ], 00:14:20.574 "product_name": "Malloc disk", 00:14:20.574 "block_size": 512, 00:14:20.574 "num_blocks": 65536, 00:14:20.574 "uuid": "fb319696-bd44-4af6-aa52-be4371106028", 00:14:20.574 "assigned_rate_limits": { 00:14:20.574 "rw_ios_per_sec": 0, 00:14:20.574 "rw_mbytes_per_sec": 0, 00:14:20.574 "r_mbytes_per_sec": 0, 00:14:20.574 "w_mbytes_per_sec": 0 00:14:20.574 }, 00:14:20.574 "claimed": true, 00:14:20.574 "claim_type": "exclusive_write", 00:14:20.574 "zoned": false, 00:14:20.574 "supported_io_types": { 00:14:20.574 "read": true, 00:14:20.574 "write": true, 00:14:20.574 "unmap": true, 00:14:20.574 "flush": true, 00:14:20.574 "reset": true, 00:14:20.574 "nvme_admin": false, 00:14:20.574 "nvme_io": false, 00:14:20.574 "nvme_io_md": false, 00:14:20.574 "write_zeroes": true, 00:14:20.574 "zcopy": true, 00:14:20.574 "get_zone_info": false, 00:14:20.574 "zone_management": false, 00:14:20.574 "zone_append": false, 00:14:20.574 "compare": false, 00:14:20.574 "compare_and_write": false, 00:14:20.574 "abort": true, 00:14:20.574 "seek_hole": false, 00:14:20.574 "seek_data": false, 00:14:20.574 "copy": true, 00:14:20.574 "nvme_iov_md": false 00:14:20.574 }, 00:14:20.574 "memory_domains": [ 00:14:20.574 { 00:14:20.574 "dma_device_id": "system", 00:14:20.574 "dma_device_type": 1 00:14:20.574 }, 00:14:20.574 { 00:14:20.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.574 "dma_device_type": 2 00:14:20.574 } 00:14:20.574 ], 00:14:20.574 "driver_specific": {} 00:14:20.574 } 00:14:20.574 ] 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.574 01:15:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.574 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.574 "name": "Existed_Raid", 00:14:20.574 "uuid": "0cf0552a-80f0-47a6-853b-82a965e49676", 00:14:20.574 "strip_size_kb": 64, 00:14:20.574 "state": "online", 00:14:20.574 "raid_level": "raid5f", 00:14:20.574 "superblock": true, 00:14:20.574 "num_base_bdevs": 4, 00:14:20.574 "num_base_bdevs_discovered": 4, 00:14:20.574 "num_base_bdevs_operational": 4, 00:14:20.574 "base_bdevs_list": [ 00:14:20.574 { 00:14:20.574 "name": "BaseBdev1", 00:14:20.574 "uuid": "f4882b1e-5ef1-404a-9170-2129401f4599", 00:14:20.574 "is_configured": true, 00:14:20.574 "data_offset": 2048, 00:14:20.574 "data_size": 63488 00:14:20.574 }, 00:14:20.574 { 00:14:20.574 "name": "BaseBdev2", 00:14:20.574 "uuid": "222e1c61-4fe9-493e-97b7-00418b3da66b", 00:14:20.574 "is_configured": true, 00:14:20.574 "data_offset": 2048, 00:14:20.574 "data_size": 63488 00:14:20.574 }, 00:14:20.574 { 00:14:20.574 "name": "BaseBdev3", 00:14:20.574 "uuid": "46ef18d1-2cb5-41de-8457-28801b691e2f", 00:14:20.574 "is_configured": true, 00:14:20.574 "data_offset": 2048, 00:14:20.574 "data_size": 63488 00:14:20.574 }, 00:14:20.574 { 00:14:20.574 "name": "BaseBdev4", 00:14:20.574 "uuid": "fb319696-bd44-4af6-aa52-be4371106028", 00:14:20.574 "is_configured": true, 00:14:20.574 "data_offset": 2048, 00:14:20.574 "data_size": 63488 00:14:20.574 } 00:14:20.574 ] 00:14:20.574 }' 00:14:20.575 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.575 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.835 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:20.835 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:14:20.835 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:20.835 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:20.835 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:20.835 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:20.835 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:20.835 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:20.835 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.835 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.835 [2024-10-15 01:15:33.481526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.835 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.835 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:20.835 "name": "Existed_Raid", 00:14:20.835 "aliases": [ 00:14:20.835 "0cf0552a-80f0-47a6-853b-82a965e49676" 00:14:20.835 ], 00:14:20.835 "product_name": "Raid Volume", 00:14:20.835 "block_size": 512, 00:14:20.835 "num_blocks": 190464, 00:14:20.835 "uuid": "0cf0552a-80f0-47a6-853b-82a965e49676", 00:14:20.835 "assigned_rate_limits": { 00:14:20.835 "rw_ios_per_sec": 0, 00:14:20.835 "rw_mbytes_per_sec": 0, 00:14:20.835 "r_mbytes_per_sec": 0, 00:14:20.835 "w_mbytes_per_sec": 0 00:14:20.835 }, 00:14:20.835 "claimed": false, 00:14:20.835 "zoned": false, 00:14:20.835 "supported_io_types": { 00:14:20.835 "read": true, 00:14:20.835 "write": true, 00:14:20.835 "unmap": false, 00:14:20.835 "flush": false, 
00:14:20.835 "reset": true, 00:14:20.835 "nvme_admin": false, 00:14:20.835 "nvme_io": false, 00:14:20.835 "nvme_io_md": false, 00:14:20.835 "write_zeroes": true, 00:14:20.835 "zcopy": false, 00:14:20.835 "get_zone_info": false, 00:14:20.835 "zone_management": false, 00:14:20.835 "zone_append": false, 00:14:20.835 "compare": false, 00:14:20.835 "compare_and_write": false, 00:14:20.835 "abort": false, 00:14:20.835 "seek_hole": false, 00:14:20.835 "seek_data": false, 00:14:20.835 "copy": false, 00:14:20.835 "nvme_iov_md": false 00:14:20.835 }, 00:14:20.835 "driver_specific": { 00:14:20.835 "raid": { 00:14:20.835 "uuid": "0cf0552a-80f0-47a6-853b-82a965e49676", 00:14:20.835 "strip_size_kb": 64, 00:14:20.835 "state": "online", 00:14:20.835 "raid_level": "raid5f", 00:14:20.835 "superblock": true, 00:14:20.835 "num_base_bdevs": 4, 00:14:20.835 "num_base_bdevs_discovered": 4, 00:14:20.835 "num_base_bdevs_operational": 4, 00:14:20.835 "base_bdevs_list": [ 00:14:20.835 { 00:14:20.835 "name": "BaseBdev1", 00:14:20.835 "uuid": "f4882b1e-5ef1-404a-9170-2129401f4599", 00:14:20.835 "is_configured": true, 00:14:20.835 "data_offset": 2048, 00:14:20.835 "data_size": 63488 00:14:20.835 }, 00:14:20.835 { 00:14:20.835 "name": "BaseBdev2", 00:14:20.835 "uuid": "222e1c61-4fe9-493e-97b7-00418b3da66b", 00:14:20.835 "is_configured": true, 00:14:20.835 "data_offset": 2048, 00:14:20.835 "data_size": 63488 00:14:20.835 }, 00:14:20.835 { 00:14:20.835 "name": "BaseBdev3", 00:14:20.835 "uuid": "46ef18d1-2cb5-41de-8457-28801b691e2f", 00:14:20.835 "is_configured": true, 00:14:20.835 "data_offset": 2048, 00:14:20.835 "data_size": 63488 00:14:20.835 }, 00:14:20.835 { 00:14:20.835 "name": "BaseBdev4", 00:14:20.835 "uuid": "fb319696-bd44-4af6-aa52-be4371106028", 00:14:20.835 "is_configured": true, 00:14:20.835 "data_offset": 2048, 00:14:20.835 "data_size": 63488 00:14:20.835 } 00:14:20.835 ] 00:14:20.835 } 00:14:20.835 } 00:14:20.835 }' 00:14:20.835 01:15:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:21.095 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:21.095 BaseBdev2 00:14:21.095 BaseBdev3 00:14:21.095 BaseBdev4' 00:14:21.095 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.095 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:21.096 01:15:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.096 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.356 [2024-10-15 01:15:33.820803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.356 "name": "Existed_Raid", 00:14:21.356 "uuid": "0cf0552a-80f0-47a6-853b-82a965e49676", 00:14:21.356 "strip_size_kb": 64, 00:14:21.356 "state": "online", 00:14:21.356 "raid_level": "raid5f", 00:14:21.356 "superblock": true, 00:14:21.356 "num_base_bdevs": 4, 00:14:21.356 "num_base_bdevs_discovered": 3, 00:14:21.356 "num_base_bdevs_operational": 3, 00:14:21.356 "base_bdevs_list": [ 00:14:21.356 { 00:14:21.356 "name": null, 00:14:21.356 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:21.356 "is_configured": false, 00:14:21.356 "data_offset": 0, 00:14:21.356 "data_size": 63488 00:14:21.356 }, 00:14:21.356 { 00:14:21.356 "name": "BaseBdev2", 00:14:21.356 "uuid": "222e1c61-4fe9-493e-97b7-00418b3da66b", 00:14:21.356 "is_configured": true, 00:14:21.356 "data_offset": 2048, 00:14:21.356 "data_size": 63488 00:14:21.356 }, 00:14:21.356 { 00:14:21.356 "name": "BaseBdev3", 00:14:21.356 "uuid": "46ef18d1-2cb5-41de-8457-28801b691e2f", 00:14:21.356 "is_configured": true, 00:14:21.356 "data_offset": 2048, 00:14:21.356 "data_size": 63488 00:14:21.356 }, 00:14:21.356 { 00:14:21.356 "name": "BaseBdev4", 00:14:21.356 "uuid": "fb319696-bd44-4af6-aa52-be4371106028", 00:14:21.356 "is_configured": true, 00:14:21.356 "data_offset": 2048, 00:14:21.356 "data_size": 63488 00:14:21.356 } 00:14:21.356 ] 00:14:21.356 }' 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.356 01:15:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.616 [2024-10-15 01:15:34.295716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:21.616 [2024-10-15 01:15:34.295950] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.616 [2024-10-15 01:15:34.307236] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.616 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:21.877 
01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.877 [2024-10-15 01:15:34.355253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.877 [2024-10-15 01:15:34.422514] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:21.877 [2024-10-15 01:15:34.422622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:21.877 BaseBdev2 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.877 [ 00:14:21.877 { 00:14:21.877 "name": "BaseBdev2", 00:14:21.877 "aliases": [ 00:14:21.877 "a7ffb38a-a363-4b41-a42f-4992dda48033" 00:14:21.877 ], 00:14:21.877 "product_name": "Malloc disk", 00:14:21.877 "block_size": 512, 00:14:21.877 "num_blocks": 65536, 00:14:21.877 "uuid": 
"a7ffb38a-a363-4b41-a42f-4992dda48033", 00:14:21.877 "assigned_rate_limits": { 00:14:21.877 "rw_ios_per_sec": 0, 00:14:21.877 "rw_mbytes_per_sec": 0, 00:14:21.877 "r_mbytes_per_sec": 0, 00:14:21.877 "w_mbytes_per_sec": 0 00:14:21.877 }, 00:14:21.877 "claimed": false, 00:14:21.877 "zoned": false, 00:14:21.877 "supported_io_types": { 00:14:21.877 "read": true, 00:14:21.877 "write": true, 00:14:21.877 "unmap": true, 00:14:21.877 "flush": true, 00:14:21.877 "reset": true, 00:14:21.877 "nvme_admin": false, 00:14:21.877 "nvme_io": false, 00:14:21.877 "nvme_io_md": false, 00:14:21.877 "write_zeroes": true, 00:14:21.877 "zcopy": true, 00:14:21.877 "get_zone_info": false, 00:14:21.877 "zone_management": false, 00:14:21.877 "zone_append": false, 00:14:21.877 "compare": false, 00:14:21.877 "compare_and_write": false, 00:14:21.877 "abort": true, 00:14:21.877 "seek_hole": false, 00:14:21.877 "seek_data": false, 00:14:21.877 "copy": true, 00:14:21.877 "nvme_iov_md": false 00:14:21.877 }, 00:14:21.877 "memory_domains": [ 00:14:21.877 { 00:14:21.877 "dma_device_id": "system", 00:14:21.877 "dma_device_type": 1 00:14:21.877 }, 00:14:21.877 { 00:14:21.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.877 "dma_device_type": 2 00:14:21.877 } 00:14:21.877 ], 00:14:21.877 "driver_specific": {} 00:14:21.877 } 00:14:21.877 ] 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.877 BaseBdev3 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:21.877 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.878 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.878 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:21.878 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.878 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.878 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.878 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:21.878 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.878 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.878 [ 00:14:21.878 { 00:14:21.878 "name": "BaseBdev3", 00:14:21.878 "aliases": [ 00:14:21.878 "f8649a80-f981-412a-b9ed-31c07687c5f1" 00:14:21.878 ], 00:14:21.878 
"product_name": "Malloc disk", 00:14:21.878 "block_size": 512, 00:14:21.878 "num_blocks": 65536, 00:14:21.878 "uuid": "f8649a80-f981-412a-b9ed-31c07687c5f1", 00:14:21.878 "assigned_rate_limits": { 00:14:21.878 "rw_ios_per_sec": 0, 00:14:21.878 "rw_mbytes_per_sec": 0, 00:14:21.878 "r_mbytes_per_sec": 0, 00:14:21.878 "w_mbytes_per_sec": 0 00:14:21.878 }, 00:14:21.878 "claimed": false, 00:14:21.878 "zoned": false, 00:14:21.878 "supported_io_types": { 00:14:21.878 "read": true, 00:14:21.878 "write": true, 00:14:21.878 "unmap": true, 00:14:21.878 "flush": true, 00:14:21.878 "reset": true, 00:14:21.878 "nvme_admin": false, 00:14:21.878 "nvme_io": false, 00:14:21.878 "nvme_io_md": false, 00:14:21.878 "write_zeroes": true, 00:14:21.878 "zcopy": true, 00:14:21.878 "get_zone_info": false, 00:14:21.878 "zone_management": false, 00:14:21.878 "zone_append": false, 00:14:21.878 "compare": false, 00:14:21.878 "compare_and_write": false, 00:14:21.878 "abort": true, 00:14:21.878 "seek_hole": false, 00:14:21.878 "seek_data": false, 00:14:21.878 "copy": true, 00:14:21.878 "nvme_iov_md": false 00:14:21.878 }, 00:14:21.878 "memory_domains": [ 00:14:21.878 { 00:14:21.878 "dma_device_id": "system", 00:14:21.878 "dma_device_type": 1 00:14:21.878 }, 00:14:21.878 { 00:14:21.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.878 "dma_device_type": 2 00:14:21.878 } 00:14:21.878 ], 00:14:21.878 "driver_specific": {} 00:14:21.878 } 00:14:21.878 ] 00:14:21.878 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.878 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:21.878 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:21.878 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:21.878 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:14:21.878 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.878 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.138 BaseBdev4 00:14:22.138 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.138 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:22.138 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:22.138 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:22.138 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:22.138 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:22.138 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:22.138 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:22.138 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.138 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.138 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.139 [ 00:14:22.139 { 00:14:22.139 "name": "BaseBdev4", 00:14:22.139 
"aliases": [ 00:14:22.139 "b7b0d710-e3b6-4075-9046-f270145cde55" 00:14:22.139 ], 00:14:22.139 "product_name": "Malloc disk", 00:14:22.139 "block_size": 512, 00:14:22.139 "num_blocks": 65536, 00:14:22.139 "uuid": "b7b0d710-e3b6-4075-9046-f270145cde55", 00:14:22.139 "assigned_rate_limits": { 00:14:22.139 "rw_ios_per_sec": 0, 00:14:22.139 "rw_mbytes_per_sec": 0, 00:14:22.139 "r_mbytes_per_sec": 0, 00:14:22.139 "w_mbytes_per_sec": 0 00:14:22.139 }, 00:14:22.139 "claimed": false, 00:14:22.139 "zoned": false, 00:14:22.139 "supported_io_types": { 00:14:22.139 "read": true, 00:14:22.139 "write": true, 00:14:22.139 "unmap": true, 00:14:22.139 "flush": true, 00:14:22.139 "reset": true, 00:14:22.139 "nvme_admin": false, 00:14:22.139 "nvme_io": false, 00:14:22.139 "nvme_io_md": false, 00:14:22.139 "write_zeroes": true, 00:14:22.139 "zcopy": true, 00:14:22.139 "get_zone_info": false, 00:14:22.139 "zone_management": false, 00:14:22.139 "zone_append": false, 00:14:22.139 "compare": false, 00:14:22.139 "compare_and_write": false, 00:14:22.139 "abort": true, 00:14:22.139 "seek_hole": false, 00:14:22.139 "seek_data": false, 00:14:22.139 "copy": true, 00:14:22.139 "nvme_iov_md": false 00:14:22.139 }, 00:14:22.139 "memory_domains": [ 00:14:22.139 { 00:14:22.139 "dma_device_id": "system", 00:14:22.139 "dma_device_type": 1 00:14:22.139 }, 00:14:22.139 { 00:14:22.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.139 "dma_device_type": 2 00:14:22.139 } 00:14:22.139 ], 00:14:22.139 "driver_specific": {} 00:14:22.139 } 00:14:22.139 ] 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.139 
01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.139 [2024-10-15 01:15:34.652294] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.139 [2024-10-15 01:15:34.652341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.139 [2024-10-15 01:15:34.652366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.139 [2024-10-15 01:15:34.654259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.139 [2024-10-15 01:15:34.654307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.139 "name": "Existed_Raid", 00:14:22.139 "uuid": "747c1786-0afc-44a1-b7b8-e605ec55be54", 00:14:22.139 "strip_size_kb": 64, 00:14:22.139 "state": "configuring", 00:14:22.139 "raid_level": "raid5f", 00:14:22.139 "superblock": true, 00:14:22.139 "num_base_bdevs": 4, 00:14:22.139 "num_base_bdevs_discovered": 3, 00:14:22.139 "num_base_bdevs_operational": 4, 00:14:22.139 "base_bdevs_list": [ 00:14:22.139 { 00:14:22.139 "name": "BaseBdev1", 00:14:22.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.139 "is_configured": false, 00:14:22.139 "data_offset": 0, 00:14:22.139 "data_size": 0 00:14:22.139 }, 00:14:22.139 { 00:14:22.139 "name": "BaseBdev2", 00:14:22.139 "uuid": "a7ffb38a-a363-4b41-a42f-4992dda48033", 00:14:22.139 "is_configured": true, 00:14:22.139 "data_offset": 2048, 00:14:22.139 "data_size": 63488 00:14:22.139 }, 00:14:22.139 { 00:14:22.139 "name": "BaseBdev3", 
00:14:22.139 "uuid": "f8649a80-f981-412a-b9ed-31c07687c5f1", 00:14:22.139 "is_configured": true, 00:14:22.139 "data_offset": 2048, 00:14:22.139 "data_size": 63488 00:14:22.139 }, 00:14:22.139 { 00:14:22.139 "name": "BaseBdev4", 00:14:22.139 "uuid": "b7b0d710-e3b6-4075-9046-f270145cde55", 00:14:22.139 "is_configured": true, 00:14:22.139 "data_offset": 2048, 00:14:22.139 "data_size": 63488 00:14:22.139 } 00:14:22.139 ] 00:14:22.139 }' 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.139 01:15:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.399 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:22.399 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.399 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.399 [2024-10-15 01:15:35.115671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.399 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.399 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:22.399 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.399 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.399 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.399 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.399 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.659 
01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.659 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.659 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.659 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.659 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.659 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.659 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.659 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.659 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.659 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.659 "name": "Existed_Raid", 00:14:22.659 "uuid": "747c1786-0afc-44a1-b7b8-e605ec55be54", 00:14:22.659 "strip_size_kb": 64, 00:14:22.659 "state": "configuring", 00:14:22.659 "raid_level": "raid5f", 00:14:22.659 "superblock": true, 00:14:22.659 "num_base_bdevs": 4, 00:14:22.659 "num_base_bdevs_discovered": 2, 00:14:22.659 "num_base_bdevs_operational": 4, 00:14:22.659 "base_bdevs_list": [ 00:14:22.659 { 00:14:22.659 "name": "BaseBdev1", 00:14:22.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.659 "is_configured": false, 00:14:22.659 "data_offset": 0, 00:14:22.659 "data_size": 0 00:14:22.659 }, 00:14:22.659 { 00:14:22.659 "name": null, 00:14:22.659 "uuid": "a7ffb38a-a363-4b41-a42f-4992dda48033", 00:14:22.659 "is_configured": false, 00:14:22.659 "data_offset": 0, 00:14:22.659 "data_size": 63488 00:14:22.659 }, 00:14:22.659 { 
00:14:22.659 "name": "BaseBdev3", 00:14:22.659 "uuid": "f8649a80-f981-412a-b9ed-31c07687c5f1", 00:14:22.659 "is_configured": true, 00:14:22.659 "data_offset": 2048, 00:14:22.659 "data_size": 63488 00:14:22.659 }, 00:14:22.659 { 00:14:22.659 "name": "BaseBdev4", 00:14:22.659 "uuid": "b7b0d710-e3b6-4075-9046-f270145cde55", 00:14:22.659 "is_configured": true, 00:14:22.659 "data_offset": 2048, 00:14:22.659 "data_size": 63488 00:14:22.659 } 00:14:22.659 ] 00:14:22.659 }' 00:14:22.659 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.659 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.919 [2024-10-15 01:15:35.589819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.919 BaseBdev1 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.919 [ 00:14:22.919 { 00:14:22.919 "name": "BaseBdev1", 00:14:22.919 "aliases": [ 00:14:22.919 "5e7e3ee5-4d9b-4ad5-adcd-fbc1089e0707" 00:14:22.919 ], 00:14:22.919 "product_name": "Malloc disk", 00:14:22.919 "block_size": 512, 00:14:22.919 "num_blocks": 65536, 00:14:22.919 "uuid": "5e7e3ee5-4d9b-4ad5-adcd-fbc1089e0707", 00:14:22.919 "assigned_rate_limits": { 00:14:22.919 "rw_ios_per_sec": 0, 00:14:22.919 "rw_mbytes_per_sec": 0, 00:14:22.919 
"r_mbytes_per_sec": 0, 00:14:22.919 "w_mbytes_per_sec": 0 00:14:22.919 }, 00:14:22.919 "claimed": true, 00:14:22.919 "claim_type": "exclusive_write", 00:14:22.919 "zoned": false, 00:14:22.919 "supported_io_types": { 00:14:22.919 "read": true, 00:14:22.919 "write": true, 00:14:22.919 "unmap": true, 00:14:22.919 "flush": true, 00:14:22.919 "reset": true, 00:14:22.919 "nvme_admin": false, 00:14:22.919 "nvme_io": false, 00:14:22.919 "nvme_io_md": false, 00:14:22.919 "write_zeroes": true, 00:14:22.919 "zcopy": true, 00:14:22.919 "get_zone_info": false, 00:14:22.919 "zone_management": false, 00:14:22.919 "zone_append": false, 00:14:22.919 "compare": false, 00:14:22.919 "compare_and_write": false, 00:14:22.919 "abort": true, 00:14:22.919 "seek_hole": false, 00:14:22.919 "seek_data": false, 00:14:22.919 "copy": true, 00:14:22.919 "nvme_iov_md": false 00:14:22.919 }, 00:14:22.919 "memory_domains": [ 00:14:22.919 { 00:14:22.919 "dma_device_id": "system", 00:14:22.919 "dma_device_type": 1 00:14:22.919 }, 00:14:22.919 { 00:14:22.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.919 "dma_device_type": 2 00:14:22.919 } 00:14:22.919 ], 00:14:22.919 "driver_specific": {} 00:14:22.919 } 00:14:22.919 ] 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.919 01:15:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.919 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.179 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.179 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.179 "name": "Existed_Raid", 00:14:23.179 "uuid": "747c1786-0afc-44a1-b7b8-e605ec55be54", 00:14:23.179 "strip_size_kb": 64, 00:14:23.179 "state": "configuring", 00:14:23.179 "raid_level": "raid5f", 00:14:23.179 "superblock": true, 00:14:23.179 "num_base_bdevs": 4, 00:14:23.179 "num_base_bdevs_discovered": 3, 00:14:23.179 "num_base_bdevs_operational": 4, 00:14:23.179 "base_bdevs_list": [ 00:14:23.179 { 00:14:23.179 "name": "BaseBdev1", 00:14:23.179 "uuid": "5e7e3ee5-4d9b-4ad5-adcd-fbc1089e0707", 00:14:23.179 "is_configured": true, 00:14:23.179 "data_offset": 2048, 00:14:23.179 "data_size": 63488 00:14:23.179 
}, 00:14:23.179 { 00:14:23.179 "name": null, 00:14:23.179 "uuid": "a7ffb38a-a363-4b41-a42f-4992dda48033", 00:14:23.179 "is_configured": false, 00:14:23.179 "data_offset": 0, 00:14:23.179 "data_size": 63488 00:14:23.179 }, 00:14:23.179 { 00:14:23.179 "name": "BaseBdev3", 00:14:23.179 "uuid": "f8649a80-f981-412a-b9ed-31c07687c5f1", 00:14:23.179 "is_configured": true, 00:14:23.179 "data_offset": 2048, 00:14:23.179 "data_size": 63488 00:14:23.179 }, 00:14:23.179 { 00:14:23.179 "name": "BaseBdev4", 00:14:23.179 "uuid": "b7b0d710-e3b6-4075-9046-f270145cde55", 00:14:23.179 "is_configured": true, 00:14:23.179 "data_offset": 2048, 00:14:23.179 "data_size": 63488 00:14:23.179 } 00:14:23.179 ] 00:14:23.179 }' 00:14:23.179 01:15:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.179 01:15:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.439 
[2024-10-15 01:15:36.077130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.439 "name": "Existed_Raid", 00:14:23.439 "uuid": "747c1786-0afc-44a1-b7b8-e605ec55be54", 00:14:23.439 "strip_size_kb": 64, 00:14:23.439 "state": "configuring", 00:14:23.439 "raid_level": "raid5f", 00:14:23.439 "superblock": true, 00:14:23.439 "num_base_bdevs": 4, 00:14:23.439 "num_base_bdevs_discovered": 2, 00:14:23.439 "num_base_bdevs_operational": 4, 00:14:23.439 "base_bdevs_list": [ 00:14:23.439 { 00:14:23.439 "name": "BaseBdev1", 00:14:23.439 "uuid": "5e7e3ee5-4d9b-4ad5-adcd-fbc1089e0707", 00:14:23.439 "is_configured": true, 00:14:23.439 "data_offset": 2048, 00:14:23.439 "data_size": 63488 00:14:23.439 }, 00:14:23.439 { 00:14:23.439 "name": null, 00:14:23.439 "uuid": "a7ffb38a-a363-4b41-a42f-4992dda48033", 00:14:23.439 "is_configured": false, 00:14:23.439 "data_offset": 0, 00:14:23.439 "data_size": 63488 00:14:23.439 }, 00:14:23.439 { 00:14:23.439 "name": null, 00:14:23.439 "uuid": "f8649a80-f981-412a-b9ed-31c07687c5f1", 00:14:23.439 "is_configured": false, 00:14:23.439 "data_offset": 0, 00:14:23.439 "data_size": 63488 00:14:23.439 }, 00:14:23.439 { 00:14:23.439 "name": "BaseBdev4", 00:14:23.439 "uuid": "b7b0d710-e3b6-4075-9046-f270145cde55", 00:14:23.439 "is_configured": true, 00:14:23.439 "data_offset": 2048, 00:14:23.439 "data_size": 63488 00:14:23.439 } 00:14:23.439 ] 00:14:23.439 }' 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.439 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.009 [2024-10-15 01:15:36.548344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.009 01:15:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.009 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.009 "name": "Existed_Raid", 00:14:24.009 "uuid": "747c1786-0afc-44a1-b7b8-e605ec55be54", 00:14:24.009 "strip_size_kb": 64, 00:14:24.009 "state": "configuring", 00:14:24.009 "raid_level": "raid5f", 00:14:24.009 "superblock": true, 00:14:24.009 "num_base_bdevs": 4, 00:14:24.009 "num_base_bdevs_discovered": 3, 00:14:24.009 "num_base_bdevs_operational": 4, 00:14:24.009 "base_bdevs_list": [ 00:14:24.009 { 00:14:24.009 "name": "BaseBdev1", 00:14:24.009 "uuid": "5e7e3ee5-4d9b-4ad5-adcd-fbc1089e0707", 00:14:24.009 "is_configured": true, 00:14:24.009 "data_offset": 2048, 00:14:24.009 "data_size": 63488 00:14:24.009 }, 00:14:24.009 { 00:14:24.009 "name": null, 00:14:24.009 "uuid": "a7ffb38a-a363-4b41-a42f-4992dda48033", 00:14:24.009 "is_configured": false, 00:14:24.010 "data_offset": 0, 00:14:24.010 "data_size": 63488 00:14:24.010 }, 00:14:24.010 { 00:14:24.010 "name": "BaseBdev3", 00:14:24.010 "uuid": "f8649a80-f981-412a-b9ed-31c07687c5f1", 00:14:24.010 "is_configured": true, 00:14:24.010 "data_offset": 2048, 00:14:24.010 "data_size": 63488 00:14:24.010 }, 00:14:24.010 { 
00:14:24.010 "name": "BaseBdev4", 00:14:24.010 "uuid": "b7b0d710-e3b6-4075-9046-f270145cde55", 00:14:24.010 "is_configured": true, 00:14:24.010 "data_offset": 2048, 00:14:24.010 "data_size": 63488 00:14:24.010 } 00:14:24.010 ] 00:14:24.010 }' 00:14:24.010 01:15:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.010 01:15:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.580 [2024-10-15 01:15:37.071563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.580 "name": "Existed_Raid", 00:14:24.580 "uuid": "747c1786-0afc-44a1-b7b8-e605ec55be54", 00:14:24.580 "strip_size_kb": 64, 00:14:24.580 "state": "configuring", 00:14:24.580 "raid_level": "raid5f", 00:14:24.580 "superblock": true, 00:14:24.580 "num_base_bdevs": 4, 00:14:24.580 "num_base_bdevs_discovered": 2, 00:14:24.580 
"num_base_bdevs_operational": 4, 00:14:24.580 "base_bdevs_list": [ 00:14:24.580 { 00:14:24.580 "name": null, 00:14:24.580 "uuid": "5e7e3ee5-4d9b-4ad5-adcd-fbc1089e0707", 00:14:24.580 "is_configured": false, 00:14:24.580 "data_offset": 0, 00:14:24.580 "data_size": 63488 00:14:24.580 }, 00:14:24.580 { 00:14:24.580 "name": null, 00:14:24.580 "uuid": "a7ffb38a-a363-4b41-a42f-4992dda48033", 00:14:24.580 "is_configured": false, 00:14:24.580 "data_offset": 0, 00:14:24.580 "data_size": 63488 00:14:24.580 }, 00:14:24.580 { 00:14:24.580 "name": "BaseBdev3", 00:14:24.580 "uuid": "f8649a80-f981-412a-b9ed-31c07687c5f1", 00:14:24.580 "is_configured": true, 00:14:24.580 "data_offset": 2048, 00:14:24.580 "data_size": 63488 00:14:24.580 }, 00:14:24.580 { 00:14:24.580 "name": "BaseBdev4", 00:14:24.580 "uuid": "b7b0d710-e3b6-4075-9046-f270145cde55", 00:14:24.580 "is_configured": true, 00:14:24.580 "data_offset": 2048, 00:14:24.580 "data_size": 63488 00:14:24.580 } 00:14:24.580 ] 00:14:24.580 }' 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.580 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.840 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.840 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:24.840 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.840 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.840 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.100 [2024-10-15 01:15:37.573102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.100 "name": "Existed_Raid", 00:14:25.100 "uuid": "747c1786-0afc-44a1-b7b8-e605ec55be54", 00:14:25.100 "strip_size_kb": 64, 00:14:25.100 "state": "configuring", 00:14:25.100 "raid_level": "raid5f", 00:14:25.100 "superblock": true, 00:14:25.100 "num_base_bdevs": 4, 00:14:25.100 "num_base_bdevs_discovered": 3, 00:14:25.100 "num_base_bdevs_operational": 4, 00:14:25.100 "base_bdevs_list": [ 00:14:25.100 { 00:14:25.100 "name": null, 00:14:25.100 "uuid": "5e7e3ee5-4d9b-4ad5-adcd-fbc1089e0707", 00:14:25.100 "is_configured": false, 00:14:25.100 "data_offset": 0, 00:14:25.100 "data_size": 63488 00:14:25.100 }, 00:14:25.100 { 00:14:25.100 "name": "BaseBdev2", 00:14:25.100 "uuid": "a7ffb38a-a363-4b41-a42f-4992dda48033", 00:14:25.100 "is_configured": true, 00:14:25.100 "data_offset": 2048, 00:14:25.100 "data_size": 63488 00:14:25.100 }, 00:14:25.100 { 00:14:25.100 "name": "BaseBdev3", 00:14:25.100 "uuid": "f8649a80-f981-412a-b9ed-31c07687c5f1", 00:14:25.100 "is_configured": true, 00:14:25.100 "data_offset": 2048, 00:14:25.100 "data_size": 63488 00:14:25.100 }, 00:14:25.100 { 00:14:25.100 "name": "BaseBdev4", 00:14:25.100 "uuid": "b7b0d710-e3b6-4075-9046-f270145cde55", 00:14:25.100 "is_configured": true, 00:14:25.100 "data_offset": 2048, 00:14:25.100 "data_size": 63488 00:14:25.100 } 00:14:25.100 ] 00:14:25.100 }' 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.100 01:15:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:25.366 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.366 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:25.366 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.366 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.366 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.366 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:25.366 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:25.366 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.366 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.366 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.630 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.630 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5e7e3ee5-4d9b-4ad5-adcd-fbc1089e0707 00:14:25.630 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.630 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.630 [2024-10-15 01:15:38.139060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:25.630 NewBaseBdev 00:14:25.630 [2024-10-15 01:15:38.139346] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 
00:14:25.630 [2024-10-15 01:15:38.139380] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:25.630 [2024-10-15 01:15:38.139656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:14:25.630 [2024-10-15 01:15:38.140116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:25.630 [2024-10-15 01:15:38.140131] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:14:25.630 [2024-10-15 01:15:38.140254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.630 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.630 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.631 [ 00:14:25.631 { 00:14:25.631 "name": "NewBaseBdev", 00:14:25.631 "aliases": [ 00:14:25.631 "5e7e3ee5-4d9b-4ad5-adcd-fbc1089e0707" 00:14:25.631 ], 00:14:25.631 "product_name": "Malloc disk", 00:14:25.631 "block_size": 512, 00:14:25.631 "num_blocks": 65536, 00:14:25.631 "uuid": "5e7e3ee5-4d9b-4ad5-adcd-fbc1089e0707", 00:14:25.631 "assigned_rate_limits": { 00:14:25.631 "rw_ios_per_sec": 0, 00:14:25.631 "rw_mbytes_per_sec": 0, 00:14:25.631 "r_mbytes_per_sec": 0, 00:14:25.631 "w_mbytes_per_sec": 0 00:14:25.631 }, 00:14:25.631 "claimed": true, 00:14:25.631 "claim_type": "exclusive_write", 00:14:25.631 "zoned": false, 00:14:25.631 "supported_io_types": { 00:14:25.631 "read": true, 00:14:25.631 "write": true, 00:14:25.631 "unmap": true, 00:14:25.631 "flush": true, 00:14:25.631 "reset": true, 00:14:25.631 "nvme_admin": false, 00:14:25.631 "nvme_io": false, 00:14:25.631 "nvme_io_md": false, 00:14:25.631 "write_zeroes": true, 00:14:25.631 "zcopy": true, 00:14:25.631 "get_zone_info": false, 00:14:25.631 "zone_management": false, 00:14:25.631 "zone_append": false, 00:14:25.631 "compare": false, 00:14:25.631 "compare_and_write": false, 00:14:25.631 "abort": true, 00:14:25.631 "seek_hole": false, 00:14:25.631 "seek_data": false, 00:14:25.631 "copy": true, 00:14:25.631 "nvme_iov_md": false 00:14:25.631 }, 00:14:25.631 "memory_domains": [ 00:14:25.631 { 00:14:25.631 "dma_device_id": "system", 00:14:25.631 "dma_device_type": 1 00:14:25.631 }, 00:14:25.631 { 00:14:25.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.631 "dma_device_type": 2 00:14:25.631 } 00:14:25.631 ], 00:14:25.631 "driver_specific": {} 00:14:25.631 } 00:14:25.631 ] 00:14:25.631 01:15:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.631 "name": "Existed_Raid", 00:14:25.631 "uuid": "747c1786-0afc-44a1-b7b8-e605ec55be54", 00:14:25.631 "strip_size_kb": 64, 00:14:25.631 "state": "online", 00:14:25.631 "raid_level": "raid5f", 00:14:25.631 "superblock": true, 00:14:25.631 "num_base_bdevs": 4, 00:14:25.631 "num_base_bdevs_discovered": 4, 00:14:25.631 "num_base_bdevs_operational": 4, 00:14:25.631 "base_bdevs_list": [ 00:14:25.631 { 00:14:25.631 "name": "NewBaseBdev", 00:14:25.631 "uuid": "5e7e3ee5-4d9b-4ad5-adcd-fbc1089e0707", 00:14:25.631 "is_configured": true, 00:14:25.631 "data_offset": 2048, 00:14:25.631 "data_size": 63488 00:14:25.631 }, 00:14:25.631 { 00:14:25.631 "name": "BaseBdev2", 00:14:25.631 "uuid": "a7ffb38a-a363-4b41-a42f-4992dda48033", 00:14:25.631 "is_configured": true, 00:14:25.631 "data_offset": 2048, 00:14:25.631 "data_size": 63488 00:14:25.631 }, 00:14:25.631 { 00:14:25.631 "name": "BaseBdev3", 00:14:25.631 "uuid": "f8649a80-f981-412a-b9ed-31c07687c5f1", 00:14:25.631 "is_configured": true, 00:14:25.631 "data_offset": 2048, 00:14:25.631 "data_size": 63488 00:14:25.631 }, 00:14:25.631 { 00:14:25.631 "name": "BaseBdev4", 00:14:25.631 "uuid": "b7b0d710-e3b6-4075-9046-f270145cde55", 00:14:25.631 "is_configured": true, 00:14:25.631 "data_offset": 2048, 00:14:25.631 "data_size": 63488 00:14:25.631 } 00:14:25.631 ] 00:14:25.631 }' 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.631 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.890 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:25.890 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:25.890 01:15:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:25.890 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:25.890 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:25.890 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.150 [2024-10-15 01:15:38.626492] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:26.150 "name": "Existed_Raid", 00:14:26.150 "aliases": [ 00:14:26.150 "747c1786-0afc-44a1-b7b8-e605ec55be54" 00:14:26.150 ], 00:14:26.150 "product_name": "Raid Volume", 00:14:26.150 "block_size": 512, 00:14:26.150 "num_blocks": 190464, 00:14:26.150 "uuid": "747c1786-0afc-44a1-b7b8-e605ec55be54", 00:14:26.150 "assigned_rate_limits": { 00:14:26.150 "rw_ios_per_sec": 0, 00:14:26.150 "rw_mbytes_per_sec": 0, 00:14:26.150 "r_mbytes_per_sec": 0, 00:14:26.150 "w_mbytes_per_sec": 0 00:14:26.150 }, 00:14:26.150 "claimed": false, 00:14:26.150 "zoned": false, 00:14:26.150 "supported_io_types": { 00:14:26.150 "read": true, 00:14:26.150 "write": true, 00:14:26.150 "unmap": false, 00:14:26.150 "flush": false, 00:14:26.150 "reset": true, 00:14:26.150 "nvme_admin": false, 00:14:26.150 "nvme_io": false, 
00:14:26.150 "nvme_io_md": false, 00:14:26.150 "write_zeroes": true, 00:14:26.150 "zcopy": false, 00:14:26.150 "get_zone_info": false, 00:14:26.150 "zone_management": false, 00:14:26.150 "zone_append": false, 00:14:26.150 "compare": false, 00:14:26.150 "compare_and_write": false, 00:14:26.150 "abort": false, 00:14:26.150 "seek_hole": false, 00:14:26.150 "seek_data": false, 00:14:26.150 "copy": false, 00:14:26.150 "nvme_iov_md": false 00:14:26.150 }, 00:14:26.150 "driver_specific": { 00:14:26.150 "raid": { 00:14:26.150 "uuid": "747c1786-0afc-44a1-b7b8-e605ec55be54", 00:14:26.150 "strip_size_kb": 64, 00:14:26.150 "state": "online", 00:14:26.150 "raid_level": "raid5f", 00:14:26.150 "superblock": true, 00:14:26.150 "num_base_bdevs": 4, 00:14:26.150 "num_base_bdevs_discovered": 4, 00:14:26.150 "num_base_bdevs_operational": 4, 00:14:26.150 "base_bdevs_list": [ 00:14:26.150 { 00:14:26.150 "name": "NewBaseBdev", 00:14:26.150 "uuid": "5e7e3ee5-4d9b-4ad5-adcd-fbc1089e0707", 00:14:26.150 "is_configured": true, 00:14:26.150 "data_offset": 2048, 00:14:26.150 "data_size": 63488 00:14:26.150 }, 00:14:26.150 { 00:14:26.150 "name": "BaseBdev2", 00:14:26.150 "uuid": "a7ffb38a-a363-4b41-a42f-4992dda48033", 00:14:26.150 "is_configured": true, 00:14:26.150 "data_offset": 2048, 00:14:26.150 "data_size": 63488 00:14:26.150 }, 00:14:26.150 { 00:14:26.150 "name": "BaseBdev3", 00:14:26.150 "uuid": "f8649a80-f981-412a-b9ed-31c07687c5f1", 00:14:26.150 "is_configured": true, 00:14:26.150 "data_offset": 2048, 00:14:26.150 "data_size": 63488 00:14:26.150 }, 00:14:26.150 { 00:14:26.150 "name": "BaseBdev4", 00:14:26.150 "uuid": "b7b0d710-e3b6-4075-9046-f270145cde55", 00:14:26.150 "is_configured": true, 00:14:26.150 "data_offset": 2048, 00:14:26.150 "data_size": 63488 00:14:26.150 } 00:14:26.150 ] 00:14:26.150 } 00:14:26.150 } 00:14:26.150 }' 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:26.150 BaseBdev2 00:14:26.150 BaseBdev3 00:14:26.150 BaseBdev4' 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.150 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.151 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.151 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.151 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.151 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:26.151 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.151 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.151 01:15:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.151 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.411 [2024-10-15 01:15:38.981636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:26.411 [2024-10-15 01:15:38.981706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:26.411 [2024-10-15 01:15:38.981798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.411 [2024-10-15 01:15:38.982090] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:26.411 [2024-10-15 01:15:38.982145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 93633 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 93633 ']' 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 93633 00:14:26.411 01:15:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:26.411 01:15:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93633 00:14:26.411 01:15:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:26.411 01:15:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:26.411 01:15:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93633' 00:14:26.411 killing process with pid 93633 00:14:26.411 01:15:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 93633 00:14:26.411 [2024-10-15 01:15:39.014643] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:26.411 01:15:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 93633 00:14:26.411 [2024-10-15 01:15:39.054568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:26.671 01:15:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:26.671 ************************************ 00:14:26.671 END TEST raid5f_state_function_test_sb 00:14:26.671 ************************************ 00:14:26.671 00:14:26.671 real 0m9.610s 00:14:26.671 user 0m16.461s 00:14:26.672 sys 0m1.987s 00:14:26.672 01:15:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:26.672 01:15:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.672 01:15:39 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:14:26.672 01:15:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:26.672 
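The state-function test above verifies the raid bdev by piping `bdev_get_bdevs` output through the jq filter `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name` to collect the configured base bdev names (yielding `NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4` in this run). As a minimal sketch of what that filter computes, here is the same selection applied in Python to a trimmed copy of the JSON shown in the log (only the fields the filter touches are reproduced):

```python
# Trimmed-down copy of the Existed_Raid JSON dumped by `rpc_cmd bdev_get_bdevs`
# in the log above; fields not used by the jq filter are omitted.
raid_bdev_info = {
    "name": "Existed_Raid",
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "NewBaseBdev", "is_configured": True},
                {"name": "BaseBdev2", "is_configured": True},
                {"name": "BaseBdev3", "is_configured": True},
                {"name": "BaseBdev4", "is_configured": True},
            ]
        }
    },
}

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(names)  # ['NewBaseBdev', 'BaseBdev2', 'BaseBdev3', 'BaseBdev4']
```

The test then loops over these names (`for name in $base_bdev_names`) and compares each base bdev's block-size signature against the raid bdev's.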
01:15:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:26.672 01:15:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:26.672 ************************************ 00:14:26.672 START TEST raid5f_superblock_test 00:14:26.672 ************************************ 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94281 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94281 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94281 ']' 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:26.672 01:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.932 [2024-10-15 01:15:39.420893] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:14:26.932 [2024-10-15 01:15:39.421020] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94281 ] 00:14:26.932 [2024-10-15 01:15:39.565750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.932 [2024-10-15 01:15:39.593001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.932 [2024-10-15 01:15:39.636361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.932 [2024-10-15 01:15:39.636483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.872 malloc1 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.872 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.872 [2024-10-15 01:15:40.267689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:27.872 [2024-10-15 01:15:40.267797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.872 [2024-10-15 01:15:40.267838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:27.872 [2024-10-15 01:15:40.267870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.872 [2024-10-15 01:15:40.270002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.873 [2024-10-15 01:15:40.270076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:27.873 pt1 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.873 malloc2 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.873 [2024-10-15 01:15:40.300346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:27.873 [2024-10-15 01:15:40.300397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.873 [2024-10-15 01:15:40.300415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:27.873 [2024-10-15 01:15:40.300425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.873 [2024-10-15 01:15:40.302444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.873 [2024-10-15 01:15:40.302531] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:27.873 pt2 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.873 malloc3 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.873 [2024-10-15 01:15:40.329132] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:27.873 [2024-10-15 01:15:40.329259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.873 [2024-10-15 01:15:40.329294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:27.873 [2024-10-15 01:15:40.329345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.873 [2024-10-15 01:15:40.331386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.873 [2024-10-15 01:15:40.331456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:27.873 pt3 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.873 01:15:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.873 malloc4 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.873 [2024-10-15 01:15:40.377449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:27.873 [2024-10-15 01:15:40.377644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.873 [2024-10-15 01:15:40.377740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:27.873 [2024-10-15 01:15:40.377820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.873 [2024-10-15 01:15:40.381882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.873 [2024-10-15 01:15:40.381988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:27.873 pt4 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.873 [2024-10-15 01:15:40.390259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:27.873 [2024-10-15 01:15:40.392348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:27.873 [2024-10-15 01:15:40.392463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:27.873 [2024-10-15 01:15:40.392518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:27.873 [2024-10-15 01:15:40.392700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:27.873 [2024-10-15 01:15:40.392716] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:27.873 [2024-10-15 01:15:40.392979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:27.873 [2024-10-15 01:15:40.393531] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:27.873 [2024-10-15 01:15:40.393597] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:27.873 [2024-10-15 01:15:40.393735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.873 
01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.873 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.873 "name": "raid_bdev1", 00:14:27.873 "uuid": "6c2dfcee-a441-4a01-a42a-3b8ed7da4261", 00:14:27.873 "strip_size_kb": 64, 00:14:27.873 "state": "online", 00:14:27.873 "raid_level": "raid5f", 00:14:27.873 "superblock": true, 00:14:27.873 "num_base_bdevs": 4, 00:14:27.873 "num_base_bdevs_discovered": 4, 00:14:27.873 "num_base_bdevs_operational": 4, 00:14:27.873 "base_bdevs_list": [ 00:14:27.873 { 00:14:27.873 "name": "pt1", 00:14:27.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.873 "is_configured": true, 00:14:27.873 "data_offset": 2048, 00:14:27.873 "data_size": 63488 00:14:27.873 }, 00:14:27.873 { 00:14:27.873 "name": "pt2", 00:14:27.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.873 "is_configured": true, 00:14:27.873 "data_offset": 2048, 00:14:27.873 
"data_size": 63488 00:14:27.873 }, 00:14:27.873 { 00:14:27.873 "name": "pt3", 00:14:27.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:27.873 "is_configured": true, 00:14:27.873 "data_offset": 2048, 00:14:27.873 "data_size": 63488 00:14:27.873 }, 00:14:27.873 { 00:14:27.873 "name": "pt4", 00:14:27.873 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:27.873 "is_configured": true, 00:14:27.873 "data_offset": 2048, 00:14:27.874 "data_size": 63488 00:14:27.874 } 00:14:27.874 ] 00:14:27.874 }' 00:14:27.874 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.874 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.133 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:28.133 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:28.133 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:28.133 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:28.133 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:28.133 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:28.393 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:28.393 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:28.393 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.393 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.393 [2024-10-15 01:15:40.862983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.393 01:15:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.393 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:28.393 "name": "raid_bdev1", 00:14:28.393 "aliases": [ 00:14:28.393 "6c2dfcee-a441-4a01-a42a-3b8ed7da4261" 00:14:28.393 ], 00:14:28.393 "product_name": "Raid Volume", 00:14:28.393 "block_size": 512, 00:14:28.393 "num_blocks": 190464, 00:14:28.393 "uuid": "6c2dfcee-a441-4a01-a42a-3b8ed7da4261", 00:14:28.393 "assigned_rate_limits": { 00:14:28.393 "rw_ios_per_sec": 0, 00:14:28.393 "rw_mbytes_per_sec": 0, 00:14:28.393 "r_mbytes_per_sec": 0, 00:14:28.393 "w_mbytes_per_sec": 0 00:14:28.393 }, 00:14:28.393 "claimed": false, 00:14:28.393 "zoned": false, 00:14:28.393 "supported_io_types": { 00:14:28.393 "read": true, 00:14:28.393 "write": true, 00:14:28.393 "unmap": false, 00:14:28.393 "flush": false, 00:14:28.393 "reset": true, 00:14:28.393 "nvme_admin": false, 00:14:28.393 "nvme_io": false, 00:14:28.393 "nvme_io_md": false, 00:14:28.393 "write_zeroes": true, 00:14:28.393 "zcopy": false, 00:14:28.393 "get_zone_info": false, 00:14:28.393 "zone_management": false, 00:14:28.393 "zone_append": false, 00:14:28.393 "compare": false, 00:14:28.393 "compare_and_write": false, 00:14:28.394 "abort": false, 00:14:28.394 "seek_hole": false, 00:14:28.394 "seek_data": false, 00:14:28.394 "copy": false, 00:14:28.394 "nvme_iov_md": false 00:14:28.394 }, 00:14:28.394 "driver_specific": { 00:14:28.394 "raid": { 00:14:28.394 "uuid": "6c2dfcee-a441-4a01-a42a-3b8ed7da4261", 00:14:28.394 "strip_size_kb": 64, 00:14:28.394 "state": "online", 00:14:28.394 "raid_level": "raid5f", 00:14:28.394 "superblock": true, 00:14:28.394 "num_base_bdevs": 4, 00:14:28.394 "num_base_bdevs_discovered": 4, 00:14:28.394 "num_base_bdevs_operational": 4, 00:14:28.394 "base_bdevs_list": [ 00:14:28.394 { 00:14:28.394 "name": "pt1", 00:14:28.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.394 "is_configured": true, 00:14:28.394 "data_offset": 2048, 
00:14:28.394 "data_size": 63488 00:14:28.394 }, 00:14:28.394 { 00:14:28.394 "name": "pt2", 00:14:28.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.394 "is_configured": true, 00:14:28.394 "data_offset": 2048, 00:14:28.394 "data_size": 63488 00:14:28.394 }, 00:14:28.394 { 00:14:28.394 "name": "pt3", 00:14:28.394 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.394 "is_configured": true, 00:14:28.394 "data_offset": 2048, 00:14:28.394 "data_size": 63488 00:14:28.394 }, 00:14:28.394 { 00:14:28.394 "name": "pt4", 00:14:28.394 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:28.394 "is_configured": true, 00:14:28.394 "data_offset": 2048, 00:14:28.394 "data_size": 63488 00:14:28.394 } 00:14:28.394 ] 00:14:28.394 } 00:14:28.394 } 00:14:28.394 }' 00:14:28.394 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:28.394 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:28.394 pt2 00:14:28.394 pt3 00:14:28.394 pt4' 00:14:28.394 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.394 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:28.394 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.394 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.394 01:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:28.394 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.394 01:15:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.394 01:15:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.394 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.654 [2024-10-15 01:15:41.170453] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6c2dfcee-a441-4a01-a42a-3b8ed7da4261 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
6c2dfcee-a441-4a01-a42a-3b8ed7da4261 ']' 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.654 [2024-10-15 01:15:41.218165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:28.654 [2024-10-15 01:15:41.218278] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:28.654 [2024-10-15 01:15:41.218396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.654 [2024-10-15 01:15:41.218501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.654 [2024-10-15 01:15:41.218581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:28.654 
01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.654 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.655 01:15:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable
00:14:28.655 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.655 [2024-10-15 01:15:41.377925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:14:28.915 [2024-10-15 01:15:41.380026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:14:28.915 [2024-10-15 01:15:41.380121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:14:28.915 [2024-10-15 01:15:41.380171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:14:28.915 [2024-10-15 01:15:41.380273] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:14:28.915 [2024-10-15 01:15:41.380377] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:14:28.916 [2024-10-15 01:15:41.380444] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:14:28.916 [2024-10-15 01:15:41.380495] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:14:28.916 [2024-10-15 01:15:41.380547] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:28.916 [2024-10-15 01:15:41.380583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring
00:14:28.916 request:
00:14:28.916 {
00:14:28.916 "name": "raid_bdev1",
00:14:28.916 "raid_level": "raid5f",
00:14:28.916 "base_bdevs": [
00:14:28.916 "malloc1",
00:14:28.916 "malloc2",
00:14:28.916 "malloc3",
00:14:28.916 "malloc4"
00:14:28.916 ],
00:14:28.916 "strip_size_kb": 64,
00:14:28.916 "superblock": false,
00:14:28.916 "method": "bdev_raid_create",
00:14:28.916 "req_id": 1
00:14:28.916 }
00:14:28.916 Got JSON-RPC error response
00:14:28.916 response:
00:14:28.916 {
00:14:28.916 "code": -17,
00:14:28.916 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:14:28.916 }
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:28.916 [2024-10-15 01:15:41.441762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:28.916 [2024-10-15 01:15:41.441901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:28.916 [2024-10-15 01:15:41.441936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:28.916 [2024-10-15 01:15:41.441945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.916 [2024-10-15 01:15:41.444172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.916 [2024-10-15 01:15:41.444219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:28.916 [2024-10-15 01:15:41.444305] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:28.916 [2024-10-15 01:15:41.444357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:28.916 pt1 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.916 "name": "raid_bdev1", 00:14:28.916 "uuid": "6c2dfcee-a441-4a01-a42a-3b8ed7da4261", 00:14:28.916 "strip_size_kb": 64, 00:14:28.916 "state": "configuring", 00:14:28.916 "raid_level": "raid5f", 00:14:28.916 "superblock": true, 00:14:28.916 "num_base_bdevs": 4, 00:14:28.916 "num_base_bdevs_discovered": 1, 00:14:28.916 "num_base_bdevs_operational": 4, 00:14:28.916 "base_bdevs_list": [ 00:14:28.916 { 00:14:28.916 "name": "pt1", 00:14:28.916 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.916 "is_configured": true, 00:14:28.916 "data_offset": 2048, 00:14:28.916 "data_size": 63488 00:14:28.916 }, 00:14:28.916 { 00:14:28.916 "name": null, 00:14:28.916 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.916 "is_configured": false, 00:14:28.916 "data_offset": 2048, 00:14:28.916 "data_size": 63488 00:14:28.916 }, 00:14:28.916 { 00:14:28.916 "name": null, 00:14:28.916 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.916 "is_configured": false, 00:14:28.916 "data_offset": 2048, 00:14:28.916 "data_size": 63488 00:14:28.916 }, 00:14:28.916 { 00:14:28.916 "name": null, 00:14:28.916 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:28.916 "is_configured": false, 00:14:28.916 "data_offset": 2048, 00:14:28.916 "data_size": 63488 00:14:28.916 } 00:14:28.916 ] 00:14:28.916 }' 
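The trace above repeatedly reduces `bdev_get_bdevs`/`bdev_raid_get_bdevs` JSON with jq, e.g. the filter at `bdev_raid.sh@188` that builds `base_bdev_names`. A minimal standalone sketch of that reduction — the JSON here is a trimmed, hypothetical stand-in for the real `raid_bdev_info` dump, keeping only the fields the filter touches:

```shell
#!/usr/bin/env bash
# Standalone sketch of the jq reduction used at bdev_raid.sh@188.
# The JSON is a trimmed, hypothetical stand-in for the raid_bdev_info dump.
raid_bdev_info='{
  "driver_specific": { "raid": { "base_bdevs_list": [
    { "name": "pt1", "is_configured": true },
    { "name": "pt2", "is_configured": false },
    { "name": "pt3", "is_configured": true }
  ] } }
}'
# select() drops members whose is_configured flag is false, so only the
# names of configured base bdevs end up in base_bdev_names.
base_bdev_names=$(jq -r \
  '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' \
  <<< "$raid_bdev_info")
echo "$base_bdev_names"
```

With all four members configured, as in the dump above, the filter yields `pt1 pt2 pt3 pt4` (one name per line), which the script then loops over to compare per-bdev properties.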
00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.916 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.176 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:29.176 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:29.176 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.176 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.436 [2024-10-15 01:15:41.901009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:29.436 [2024-10-15 01:15:41.901149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.436 [2024-10-15 01:15:41.901207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:29.436 [2024-10-15 01:15:41.901243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.436 [2024-10-15 01:15:41.901714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.436 [2024-10-15 01:15:41.901777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:29.436 [2024-10-15 01:15:41.901894] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:29.436 [2024-10-15 01:15:41.901949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:29.436 pt2 00:14:29.436 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.436 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:29.436 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.437 [2024-10-15 01:15:41.913015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.437 "name": "raid_bdev1", 00:14:29.437 "uuid": "6c2dfcee-a441-4a01-a42a-3b8ed7da4261", 00:14:29.437 "strip_size_kb": 64, 00:14:29.437 "state": "configuring", 00:14:29.437 "raid_level": "raid5f", 00:14:29.437 "superblock": true, 00:14:29.437 "num_base_bdevs": 4, 00:14:29.437 "num_base_bdevs_discovered": 1, 00:14:29.437 "num_base_bdevs_operational": 4, 00:14:29.437 "base_bdevs_list": [ 00:14:29.437 { 00:14:29.437 "name": "pt1", 00:14:29.437 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.437 "is_configured": true, 00:14:29.437 "data_offset": 2048, 00:14:29.437 "data_size": 63488 00:14:29.437 }, 00:14:29.437 { 00:14:29.437 "name": null, 00:14:29.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.437 "is_configured": false, 00:14:29.437 "data_offset": 0, 00:14:29.437 "data_size": 63488 00:14:29.437 }, 00:14:29.437 { 00:14:29.437 "name": null, 00:14:29.437 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.437 "is_configured": false, 00:14:29.437 "data_offset": 2048, 00:14:29.437 "data_size": 63488 00:14:29.437 }, 00:14:29.437 { 00:14:29.437 "name": null, 00:14:29.437 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:29.437 "is_configured": false, 00:14:29.437 "data_offset": 2048, 00:14:29.437 "data_size": 63488 00:14:29.437 } 00:14:29.437 ] 00:14:29.437 }' 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.437 01:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
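The `bdev_raid.sh@478`-`@479` loop seen here re-creates one passthru bdev per remaining base bdev (`pt2` through `pt4`). A simplified sketch of its shape — `rpc_cmd` is stubbed with `printf` so the generated RPC invocations can be inspected without a running SPDK target; the names and uuids mirror the trace:

```shell
#!/usr/bin/env bash
# Simplified sketch of the recreate loop at bdev_raid.sh@478-479.
# rpc_cmd is stubbed so no SPDK target is needed; names/uuids mirror the trace.
rpc_cmd() { printf '%s\n' "rpc_cmd $*"; }
num_base_bdevs=4
cmds=()
for (( i = 1; i < num_base_bdevs; i++ )); do
  n=$(( i + 1 ))   # first iteration recreates pt2, matching the trace
  cmds+=( "$(rpc_cmd bdev_passthru_create -b "malloc$n" -p "pt$n" \
             -u "00000000-0000-0000-0000-00000000000$n")" )
done
printf '%s\n' "${cmds[@]}"
```

Note the loop starts at `i = 1` rather than 0: `pt1` was already re-created at `bdev_raid.sh@465`, so only the remaining three members need passthru bdevs before the raid can be re-assembled.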
00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.698 [2024-10-15 01:15:42.360256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:29.698 [2024-10-15 01:15:42.360347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.698 [2024-10-15 01:15:42.360369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:29.698 [2024-10-15 01:15:42.360379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.698 [2024-10-15 01:15:42.360784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.698 [2024-10-15 01:15:42.360803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:29.698 [2024-10-15 01:15:42.360879] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:29.698 [2024-10-15 01:15:42.360902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:29.698 pt2 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.698 [2024-10-15 01:15:42.372197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:14:29.698 [2024-10-15 01:15:42.372280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.698 [2024-10-15 01:15:42.372303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:29.698 [2024-10-15 01:15:42.372313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.698 [2024-10-15 01:15:42.372716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.698 [2024-10-15 01:15:42.372734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:29.698 [2024-10-15 01:15:42.372806] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:29.698 [2024-10-15 01:15:42.372829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:29.698 pt3 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.698 [2024-10-15 01:15:42.384228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:29.698 [2024-10-15 01:15:42.384292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.698 [2024-10-15 01:15:42.384311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:29.698 [2024-10-15 01:15:42.384321] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.698 [2024-10-15 01:15:42.384682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.698 [2024-10-15 01:15:42.384701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:29.698 [2024-10-15 01:15:42.384773] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:29.698 [2024-10-15 01:15:42.384795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:29.698 [2024-10-15 01:15:42.384905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:29.698 [2024-10-15 01:15:42.384923] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:29.698 [2024-10-15 01:15:42.385149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:29.698 [2024-10-15 01:15:42.385612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:29.698 [2024-10-15 01:15:42.385623] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:14:29.698 [2024-10-15 01:15:42.385748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.698 pt4 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.698 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.958 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.958 "name": "raid_bdev1", 00:14:29.958 "uuid": "6c2dfcee-a441-4a01-a42a-3b8ed7da4261", 00:14:29.958 "strip_size_kb": 64, 00:14:29.958 "state": "online", 00:14:29.958 "raid_level": "raid5f", 00:14:29.958 "superblock": true, 00:14:29.958 "num_base_bdevs": 4, 00:14:29.958 "num_base_bdevs_discovered": 4, 00:14:29.958 "num_base_bdevs_operational": 4, 00:14:29.958 "base_bdevs_list": [ 00:14:29.958 { 00:14:29.958 "name": "pt1", 00:14:29.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.958 "is_configured": true, 00:14:29.958 
"data_offset": 2048, 00:14:29.958 "data_size": 63488 00:14:29.959 }, 00:14:29.959 { 00:14:29.959 "name": "pt2", 00:14:29.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.959 "is_configured": true, 00:14:29.959 "data_offset": 2048, 00:14:29.959 "data_size": 63488 00:14:29.959 }, 00:14:29.959 { 00:14:29.959 "name": "pt3", 00:14:29.959 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.959 "is_configured": true, 00:14:29.959 "data_offset": 2048, 00:14:29.959 "data_size": 63488 00:14:29.959 }, 00:14:29.959 { 00:14:29.959 "name": "pt4", 00:14:29.959 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:29.959 "is_configured": true, 00:14:29.959 "data_offset": 2048, 00:14:29.959 "data_size": 63488 00:14:29.959 } 00:14:29.959 ] 00:14:29.959 }' 00:14:29.959 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.959 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.219 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:30.219 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:30.219 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:30.219 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:30.219 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:30.219 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:30.219 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:30.219 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.219 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.219 01:15:42 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:30.219 [2024-10-15 01:15:42.859794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.219 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.219 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:30.219 "name": "raid_bdev1", 00:14:30.219 "aliases": [ 00:14:30.219 "6c2dfcee-a441-4a01-a42a-3b8ed7da4261" 00:14:30.219 ], 00:14:30.219 "product_name": "Raid Volume", 00:14:30.219 "block_size": 512, 00:14:30.219 "num_blocks": 190464, 00:14:30.219 "uuid": "6c2dfcee-a441-4a01-a42a-3b8ed7da4261", 00:14:30.219 "assigned_rate_limits": { 00:14:30.219 "rw_ios_per_sec": 0, 00:14:30.219 "rw_mbytes_per_sec": 0, 00:14:30.219 "r_mbytes_per_sec": 0, 00:14:30.219 "w_mbytes_per_sec": 0 00:14:30.219 }, 00:14:30.219 "claimed": false, 00:14:30.219 "zoned": false, 00:14:30.219 "supported_io_types": { 00:14:30.219 "read": true, 00:14:30.219 "write": true, 00:14:30.219 "unmap": false, 00:14:30.219 "flush": false, 00:14:30.219 "reset": true, 00:14:30.219 "nvme_admin": false, 00:14:30.219 "nvme_io": false, 00:14:30.219 "nvme_io_md": false, 00:14:30.219 "write_zeroes": true, 00:14:30.219 "zcopy": false, 00:14:30.219 "get_zone_info": false, 00:14:30.219 "zone_management": false, 00:14:30.219 "zone_append": false, 00:14:30.219 "compare": false, 00:14:30.219 "compare_and_write": false, 00:14:30.219 "abort": false, 00:14:30.219 "seek_hole": false, 00:14:30.219 "seek_data": false, 00:14:30.219 "copy": false, 00:14:30.219 "nvme_iov_md": false 00:14:30.219 }, 00:14:30.219 "driver_specific": { 00:14:30.219 "raid": { 00:14:30.219 "uuid": "6c2dfcee-a441-4a01-a42a-3b8ed7da4261", 00:14:30.219 "strip_size_kb": 64, 00:14:30.219 "state": "online", 00:14:30.219 "raid_level": "raid5f", 00:14:30.219 "superblock": true, 00:14:30.219 "num_base_bdevs": 4, 00:14:30.219 "num_base_bdevs_discovered": 4, 
00:14:30.219 "num_base_bdevs_operational": 4, 00:14:30.219 "base_bdevs_list": [ 00:14:30.219 { 00:14:30.219 "name": "pt1", 00:14:30.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:30.219 "is_configured": true, 00:14:30.219 "data_offset": 2048, 00:14:30.219 "data_size": 63488 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "name": "pt2", 00:14:30.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.219 "is_configured": true, 00:14:30.219 "data_offset": 2048, 00:14:30.219 "data_size": 63488 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "name": "pt3", 00:14:30.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.219 "is_configured": true, 00:14:30.219 "data_offset": 2048, 00:14:30.219 "data_size": 63488 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "name": "pt4", 00:14:30.219 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:30.219 "is_configured": true, 00:14:30.219 "data_offset": 2048, 00:14:30.219 "data_size": 63488 00:14:30.219 } 00:14:30.219 ] 00:14:30.219 } 00:14:30.219 } 00:14:30.219 }' 00:14:30.219 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:30.219 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:30.219 pt2 00:14:30.219 pt3 00:14:30.219 pt4' 00:14:30.219 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.483 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:30.483 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.483 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.483 01:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:14:30.483 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.483 01:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:30.483 [2024-10-15 01:15:43.191219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.483 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.748 01:15:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6c2dfcee-a441-4a01-a42a-3b8ed7da4261 '!=' 6c2dfcee-a441-4a01-a42a-3b8ed7da4261 ']' 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.748 [2024-10-15 01:15:43.242960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.748 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:30.749 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.749 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.749 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.749 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.749 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.749 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.749 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.749 "name": "raid_bdev1", 00:14:30.749 "uuid": "6c2dfcee-a441-4a01-a42a-3b8ed7da4261", 00:14:30.749 "strip_size_kb": 64, 00:14:30.749 "state": "online", 00:14:30.749 "raid_level": "raid5f", 00:14:30.749 "superblock": true, 00:14:30.749 "num_base_bdevs": 4, 00:14:30.749 "num_base_bdevs_discovered": 3, 00:14:30.749 "num_base_bdevs_operational": 3, 00:14:30.749 "base_bdevs_list": [ 00:14:30.749 { 00:14:30.749 "name": null, 00:14:30.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.749 "is_configured": false, 00:14:30.749 "data_offset": 0, 00:14:30.749 "data_size": 63488 00:14:30.749 }, 00:14:30.749 { 00:14:30.749 "name": "pt2", 00:14:30.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.749 "is_configured": true, 00:14:30.749 "data_offset": 2048, 00:14:30.749 "data_size": 63488 00:14:30.749 }, 00:14:30.749 { 00:14:30.749 "name": "pt3", 00:14:30.749 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.749 "is_configured": true, 00:14:30.749 "data_offset": 2048, 00:14:30.749 "data_size": 63488 00:14:30.749 }, 00:14:30.749 { 00:14:30.749 "name": "pt4", 00:14:30.749 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:30.749 "is_configured": true, 00:14:30.749 
"data_offset": 2048, 00:14:30.749 "data_size": 63488 00:14:30.749 } 00:14:30.749 ] 00:14:30.749 }' 00:14:30.749 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.749 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.008 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.008 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.008 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.008 [2024-10-15 01:15:43.690161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.008 [2024-10-15 01:15:43.690300] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.008 [2024-10-15 01:15:43.690419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.008 [2024-10-15 01:15:43.690515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.008 [2024-10-15 01:15:43.690571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:14:31.008 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.008 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.008 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.008 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.008 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:31.008 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.268 01:15:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:31.268 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:31.268 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:31.268 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.269 [2024-10-15 01:15:43.789943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:31.269 [2024-10-15 01:15:43.790046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.269 [2024-10-15 01:15:43.790068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:31.269 [2024-10-15 01:15:43.790079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.269 [2024-10-15 01:15:43.792399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.269 [2024-10-15 01:15:43.792475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:31.269 [2024-10-15 01:15:43.792579] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:31.269 [2024-10-15 01:15:43.792652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:31.269 pt2 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.269 "name": "raid_bdev1", 00:14:31.269 "uuid": "6c2dfcee-a441-4a01-a42a-3b8ed7da4261", 00:14:31.269 "strip_size_kb": 64, 00:14:31.269 "state": "configuring", 00:14:31.269 "raid_level": "raid5f", 00:14:31.269 "superblock": true, 00:14:31.269 
"num_base_bdevs": 4, 00:14:31.269 "num_base_bdevs_discovered": 1, 00:14:31.269 "num_base_bdevs_operational": 3, 00:14:31.269 "base_bdevs_list": [ 00:14:31.269 { 00:14:31.269 "name": null, 00:14:31.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.269 "is_configured": false, 00:14:31.269 "data_offset": 2048, 00:14:31.269 "data_size": 63488 00:14:31.269 }, 00:14:31.269 { 00:14:31.269 "name": "pt2", 00:14:31.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.269 "is_configured": true, 00:14:31.269 "data_offset": 2048, 00:14:31.269 "data_size": 63488 00:14:31.269 }, 00:14:31.269 { 00:14:31.269 "name": null, 00:14:31.269 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.269 "is_configured": false, 00:14:31.269 "data_offset": 2048, 00:14:31.269 "data_size": 63488 00:14:31.269 }, 00:14:31.269 { 00:14:31.269 "name": null, 00:14:31.269 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:31.269 "is_configured": false, 00:14:31.269 "data_offset": 2048, 00:14:31.269 "data_size": 63488 00:14:31.269 } 00:14:31.269 ] 00:14:31.269 }' 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.269 01:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.529 [2024-10-15 01:15:44.225278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:31.529 [2024-10-15 
01:15:44.225406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.529 [2024-10-15 01:15:44.225443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:31.529 [2024-10-15 01:15:44.225476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.529 [2024-10-15 01:15:44.225892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.529 [2024-10-15 01:15:44.225914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:31.529 [2024-10-15 01:15:44.225995] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:31.529 [2024-10-15 01:15:44.226022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:31.529 pt3 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.529 01:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.789 01:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.789 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.789 "name": "raid_bdev1", 00:14:31.789 "uuid": "6c2dfcee-a441-4a01-a42a-3b8ed7da4261", 00:14:31.789 "strip_size_kb": 64, 00:14:31.789 "state": "configuring", 00:14:31.789 "raid_level": "raid5f", 00:14:31.789 "superblock": true, 00:14:31.789 "num_base_bdevs": 4, 00:14:31.789 "num_base_bdevs_discovered": 2, 00:14:31.789 "num_base_bdevs_operational": 3, 00:14:31.789 "base_bdevs_list": [ 00:14:31.789 { 00:14:31.789 "name": null, 00:14:31.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.789 "is_configured": false, 00:14:31.789 "data_offset": 2048, 00:14:31.789 "data_size": 63488 00:14:31.789 }, 00:14:31.789 { 00:14:31.789 "name": "pt2", 00:14:31.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.789 "is_configured": true, 00:14:31.789 "data_offset": 2048, 00:14:31.789 "data_size": 63488 00:14:31.789 }, 00:14:31.789 { 00:14:31.789 "name": "pt3", 00:14:31.789 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.789 "is_configured": true, 00:14:31.789 "data_offset": 2048, 00:14:31.789 "data_size": 63488 00:14:31.789 }, 00:14:31.789 { 00:14:31.789 "name": null, 00:14:31.789 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:31.789 "is_configured": false, 00:14:31.789 "data_offset": 2048, 
00:14:31.789 "data_size": 63488 00:14:31.789 } 00:14:31.789 ] 00:14:31.789 }' 00:14:31.789 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.789 01:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.048 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:32.048 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:32.048 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:32.048 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:32.048 01:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.048 01:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.049 [2024-10-15 01:15:44.620573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:32.049 [2024-10-15 01:15:44.620709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.049 [2024-10-15 01:15:44.620752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:32.049 [2024-10-15 01:15:44.620784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.049 [2024-10-15 01:15:44.621249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.049 [2024-10-15 01:15:44.621312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:32.049 [2024-10-15 01:15:44.621422] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:32.049 [2024-10-15 01:15:44.621475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:32.049 [2024-10-15 01:15:44.621612] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:32.049 [2024-10-15 01:15:44.621652] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:32.049 [2024-10-15 01:15:44.621912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:32.049 [2024-10-15 01:15:44.622511] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:32.049 [2024-10-15 01:15:44.622563] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:14:32.049 [2024-10-15 01:15:44.622859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.049 pt4 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.049 
01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.049 "name": "raid_bdev1", 00:14:32.049 "uuid": "6c2dfcee-a441-4a01-a42a-3b8ed7da4261", 00:14:32.049 "strip_size_kb": 64, 00:14:32.049 "state": "online", 00:14:32.049 "raid_level": "raid5f", 00:14:32.049 "superblock": true, 00:14:32.049 "num_base_bdevs": 4, 00:14:32.049 "num_base_bdevs_discovered": 3, 00:14:32.049 "num_base_bdevs_operational": 3, 00:14:32.049 "base_bdevs_list": [ 00:14:32.049 { 00:14:32.049 "name": null, 00:14:32.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.049 "is_configured": false, 00:14:32.049 "data_offset": 2048, 00:14:32.049 "data_size": 63488 00:14:32.049 }, 00:14:32.049 { 00:14:32.049 "name": "pt2", 00:14:32.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.049 "is_configured": true, 00:14:32.049 "data_offset": 2048, 00:14:32.049 "data_size": 63488 00:14:32.049 }, 00:14:32.049 { 00:14:32.049 "name": "pt3", 00:14:32.049 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:32.049 "is_configured": true, 00:14:32.049 "data_offset": 2048, 00:14:32.049 "data_size": 63488 00:14:32.049 }, 00:14:32.049 { 00:14:32.049 "name": "pt4", 00:14:32.049 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:32.049 "is_configured": true, 00:14:32.049 "data_offset": 2048, 00:14:32.049 "data_size": 63488 00:14:32.049 } 00:14:32.049 ] 00:14:32.049 }' 00:14:32.049 01:15:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.049 01:15:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.617 [2024-10-15 01:15:45.055919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.617 [2024-10-15 01:15:45.055955] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.617 [2024-10-15 01:15:45.056050] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.617 [2024-10-15 01:15:45.056138] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.617 [2024-10-15 01:15:45.056150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.617 [2024-10-15 01:15:45.131787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:32.617 [2024-10-15 01:15:45.131896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.617 [2024-10-15 01:15:45.131935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:32.617 [2024-10-15 01:15:45.131963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.617 [2024-10-15 01:15:45.134431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.617 [2024-10-15 01:15:45.134508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:32.617 [2024-10-15 01:15:45.134622] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:32.617 [2024-10-15 01:15:45.134677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:32.617 
[2024-10-15 01:15:45.134816] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:32.617 [2024-10-15 01:15:45.134873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.617 [2024-10-15 01:15:45.134928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:14:32.617 [2024-10-15 01:15:45.135008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:32.617 [2024-10-15 01:15:45.135147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:32.617 pt1 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.617 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.617 "name": "raid_bdev1", 00:14:32.617 "uuid": "6c2dfcee-a441-4a01-a42a-3b8ed7da4261", 00:14:32.617 "strip_size_kb": 64, 00:14:32.617 "state": "configuring", 00:14:32.617 "raid_level": "raid5f", 00:14:32.617 "superblock": true, 00:14:32.617 "num_base_bdevs": 4, 00:14:32.617 "num_base_bdevs_discovered": 2, 00:14:32.617 "num_base_bdevs_operational": 3, 00:14:32.617 "base_bdevs_list": [ 00:14:32.617 { 00:14:32.617 "name": null, 00:14:32.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.617 "is_configured": false, 00:14:32.617 "data_offset": 2048, 00:14:32.617 "data_size": 63488 00:14:32.617 }, 00:14:32.617 { 00:14:32.617 "name": "pt2", 00:14:32.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.617 "is_configured": true, 00:14:32.617 "data_offset": 2048, 00:14:32.617 "data_size": 63488 00:14:32.617 }, 00:14:32.617 { 00:14:32.617 "name": "pt3", 00:14:32.617 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:32.618 "is_configured": true, 00:14:32.618 "data_offset": 2048, 00:14:32.618 "data_size": 63488 00:14:32.618 }, 00:14:32.618 { 00:14:32.618 "name": null, 00:14:32.618 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:32.618 "is_configured": false, 00:14:32.618 "data_offset": 2048, 00:14:32.618 "data_size": 63488 00:14:32.618 } 00:14:32.618 ] 
00:14:32.618 }' 00:14:32.618 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.618 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.876 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:32.876 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.876 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.876 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:32.876 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.136 [2024-10-15 01:15:45.638953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:33.136 [2024-10-15 01:15:45.639122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.136 [2024-10-15 01:15:45.639150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:33.136 [2024-10-15 01:15:45.639164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.136 [2024-10-15 01:15:45.639620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.136 [2024-10-15 01:15:45.639643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:14:33.136 [2024-10-15 01:15:45.639728] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:33.136 [2024-10-15 01:15:45.639765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:33.136 [2024-10-15 01:15:45.639889] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:14:33.136 [2024-10-15 01:15:45.639902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:33.136 [2024-10-15 01:15:45.640173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:33.136 [2024-10-15 01:15:45.640811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:14:33.136 [2024-10-15 01:15:45.640839] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:14:33.136 [2024-10-15 01:15:45.641045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.136 pt4 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.136 01:15:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.136 "name": "raid_bdev1", 00:14:33.136 "uuid": "6c2dfcee-a441-4a01-a42a-3b8ed7da4261", 00:14:33.136 "strip_size_kb": 64, 00:14:33.136 "state": "online", 00:14:33.136 "raid_level": "raid5f", 00:14:33.136 "superblock": true, 00:14:33.136 "num_base_bdevs": 4, 00:14:33.136 "num_base_bdevs_discovered": 3, 00:14:33.136 "num_base_bdevs_operational": 3, 00:14:33.136 "base_bdevs_list": [ 00:14:33.136 { 00:14:33.136 "name": null, 00:14:33.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.136 "is_configured": false, 00:14:33.136 "data_offset": 2048, 00:14:33.136 "data_size": 63488 00:14:33.136 }, 00:14:33.136 { 00:14:33.136 "name": "pt2", 00:14:33.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:33.136 "is_configured": true, 00:14:33.136 "data_offset": 2048, 00:14:33.136 "data_size": 63488 00:14:33.136 }, 00:14:33.136 { 00:14:33.136 "name": "pt3", 00:14:33.136 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:33.136 "is_configured": true, 00:14:33.136 "data_offset": 2048, 00:14:33.136 "data_size": 63488 
00:14:33.136 }, 00:14:33.136 { 00:14:33.136 "name": "pt4", 00:14:33.136 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:33.136 "is_configured": true, 00:14:33.136 "data_offset": 2048, 00:14:33.136 "data_size": 63488 00:14:33.136 } 00:14:33.136 ] 00:14:33.136 }' 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.136 01:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:33.706 [2024-10-15 01:15:46.182250] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6c2dfcee-a441-4a01-a42a-3b8ed7da4261 '!=' 6c2dfcee-a441-4a01-a42a-3b8ed7da4261 ']' 00:14:33.706 01:15:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94281 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94281 ']' 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94281 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94281 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:33.706 killing process with pid 94281 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94281' 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94281 00:14:33.706 [2024-10-15 01:15:46.259953] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.706 [2024-10-15 01:15:46.260066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.706 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94281 00:14:33.706 [2024-10-15 01:15:46.260154] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.706 [2024-10-15 01:15:46.260166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:14:33.706 [2024-10-15 01:15:46.304704] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.966 ************************************ 00:14:33.966 END TEST raid5f_superblock_test 00:14:33.966 
************************************ 00:14:33.967 01:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:33.967 00:14:33.967 real 0m7.187s 00:14:33.967 user 0m12.095s 00:14:33.967 sys 0m1.565s 00:14:33.967 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.967 01:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.967 01:15:46 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:33.967 01:15:46 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:14:33.967 01:15:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:33.967 01:15:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.967 01:15:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.967 ************************************ 00:14:33.967 START TEST raid5f_rebuild_test 00:14:33.967 ************************************ 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:33.967 01:15:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=94754 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 94754 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 94754 ']' 00:14:33.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:33.967 01:15:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.967 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:33.967 Zero copy mechanism will not be used. 00:14:33.967 [2024-10-15 01:15:46.669036] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:14:33.967 [2024-10-15 01:15:46.669167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94754 ] 00:14:34.227 [2024-10-15 01:15:46.816133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.227 [2024-10-15 01:15:46.846521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.227 [2024-10-15 01:15:46.888877] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.227 [2024-10-15 01:15:46.888914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.168 BaseBdev1_malloc 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.168 [2024-10-15 01:15:47.551376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:35.168 [2024-10-15 01:15:47.551449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.168 [2024-10-15 01:15:47.551473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:35.168 [2024-10-15 01:15:47.551485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.168 [2024-10-15 01:15:47.553664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.168 [2024-10-15 01:15:47.553703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:35.168 BaseBdev1 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.168 BaseBdev2_malloc 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.168 [2024-10-15 01:15:47.580089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:35.168 [2024-10-15 01:15:47.580158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.168 [2024-10-15 01:15:47.580192] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:35.168 [2024-10-15 01:15:47.580202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.168 [2024-10-15 01:15:47.582327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.168 [2024-10-15 01:15:47.582415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:35.168 BaseBdev2 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.168 BaseBdev3_malloc 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.168 [2024-10-15 01:15:47.608840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:35.168 [2024-10-15 01:15:47.608909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.168 [2024-10-15 01:15:47.608936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:35.168 [2024-10-15 01:15:47.608945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.168 
[2024-10-15 01:15:47.611033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.168 [2024-10-15 01:15:47.611075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:35.168 BaseBdev3 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.168 BaseBdev4_malloc 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.168 [2024-10-15 01:15:47.645681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:35.168 [2024-10-15 01:15:47.645814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.168 [2024-10-15 01:15:47.645849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:35.168 [2024-10-15 01:15:47.645859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.168 [2024-10-15 01:15:47.648050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.168 [2024-10-15 01:15:47.648089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:14:35.168 BaseBdev4 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.168 spare_malloc 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.168 spare_delay 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.168 [2024-10-15 01:15:47.686439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:35.168 [2024-10-15 01:15:47.686496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.168 [2024-10-15 01:15:47.686523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:35.168 [2024-10-15 01:15:47.686532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.168 [2024-10-15 01:15:47.688680] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.168 [2024-10-15 01:15:47.688772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:35.168 spare 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.168 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.168 [2024-10-15 01:15:47.698497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.168 [2024-10-15 01:15:47.700366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.168 [2024-10-15 01:15:47.700433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:35.168 [2024-10-15 01:15:47.700485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:35.169 [2024-10-15 01:15:47.700582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:35.169 [2024-10-15 01:15:47.700591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:35.169 [2024-10-15 01:15:47.700884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:35.169 [2024-10-15 01:15:47.701391] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:35.169 [2024-10-15 01:15:47.701411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:35.169 [2024-10-15 01:15:47.701539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.169 01:15:47 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.169 "name": "raid_bdev1", 00:14:35.169 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:35.169 "strip_size_kb": 64, 00:14:35.169 "state": "online", 00:14:35.169 
"raid_level": "raid5f", 00:14:35.169 "superblock": false, 00:14:35.169 "num_base_bdevs": 4, 00:14:35.169 "num_base_bdevs_discovered": 4, 00:14:35.169 "num_base_bdevs_operational": 4, 00:14:35.169 "base_bdevs_list": [ 00:14:35.169 { 00:14:35.169 "name": "BaseBdev1", 00:14:35.169 "uuid": "65f8cf47-7668-5d2b-b26a-ad2a0bc9e1f9", 00:14:35.169 "is_configured": true, 00:14:35.169 "data_offset": 0, 00:14:35.169 "data_size": 65536 00:14:35.169 }, 00:14:35.169 { 00:14:35.169 "name": "BaseBdev2", 00:14:35.169 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:35.169 "is_configured": true, 00:14:35.169 "data_offset": 0, 00:14:35.169 "data_size": 65536 00:14:35.169 }, 00:14:35.169 { 00:14:35.169 "name": "BaseBdev3", 00:14:35.169 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:35.169 "is_configured": true, 00:14:35.169 "data_offset": 0, 00:14:35.169 "data_size": 65536 00:14:35.169 }, 00:14:35.169 { 00:14:35.169 "name": "BaseBdev4", 00:14:35.169 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:35.169 "is_configured": true, 00:14:35.169 "data_offset": 0, 00:14:35.169 "data_size": 65536 00:14:35.169 } 00:14:35.169 ] 00:14:35.169 }' 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.169 01:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.447 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:35.447 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:35.447 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.447 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.447 [2024-10-15 01:15:48.158719] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:14:35.708 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:35.967 [2024-10-15 01:15:48.442076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:35.967 /dev/nbd0 00:14:35.967 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:35.967 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:35.967 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:35.967 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:35.967 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:35.967 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:35.967 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:35.967 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:35.967 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:35.967 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:35.967 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.967 1+0 records in 00:14:35.967 1+0 records out 00:14:35.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426812 s, 9.6 MB/s 00:14:35.967 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.967 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:35.967 01:15:48 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.968 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:35.968 01:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:35.968 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.968 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.968 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:35.968 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:35.968 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:35.968 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:14:36.227 512+0 records in 00:14:36.227 512+0 records out 00:14:36.227 100663296 bytes (101 MB, 96 MiB) copied, 0.408255 s, 247 MB/s 00:14:36.227 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:36.227 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.227 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:36.227 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:36.227 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:36.227 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.227 01:15:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:36.487 
[2024-10-15 01:15:49.141330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.487 [2024-10-15 01:15:49.161401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.487 01:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.747 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.747 "name": "raid_bdev1", 00:14:36.747 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:36.747 "strip_size_kb": 64, 00:14:36.747 "state": "online", 00:14:36.747 "raid_level": "raid5f", 00:14:36.747 "superblock": false, 00:14:36.747 "num_base_bdevs": 4, 00:14:36.747 "num_base_bdevs_discovered": 3, 00:14:36.747 "num_base_bdevs_operational": 3, 00:14:36.747 "base_bdevs_list": [ 00:14:36.747 { 00:14:36.747 "name": null, 00:14:36.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.747 "is_configured": false, 00:14:36.747 "data_offset": 0, 00:14:36.747 "data_size": 65536 00:14:36.747 }, 00:14:36.747 { 00:14:36.747 "name": "BaseBdev2", 00:14:36.747 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:36.747 "is_configured": true, 00:14:36.747 "data_offset": 0, 00:14:36.747 "data_size": 65536 00:14:36.747 }, 00:14:36.747 { 00:14:36.747 "name": "BaseBdev3", 00:14:36.747 "uuid": 
"68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:36.747 "is_configured": true, 00:14:36.747 "data_offset": 0, 00:14:36.747 "data_size": 65536 00:14:36.747 }, 00:14:36.747 { 00:14:36.747 "name": "BaseBdev4", 00:14:36.747 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:36.747 "is_configured": true, 00:14:36.747 "data_offset": 0, 00:14:36.747 "data_size": 65536 00:14:36.747 } 00:14:36.747 ] 00:14:36.747 }' 00:14:36.747 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.747 01:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.007 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:37.007 01:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.007 01:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.007 [2024-10-15 01:15:49.612649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.007 [2024-10-15 01:15:49.616973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:14:37.007 01:15:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.007 01:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:37.007 [2024-10-15 01:15:49.619259] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:37.945 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.945 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.945 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.945 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.946 01:15:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.946 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.946 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.946 01:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.946 01:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.946 01:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.205 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.205 "name": "raid_bdev1", 00:14:38.205 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:38.205 "strip_size_kb": 64, 00:14:38.205 "state": "online", 00:14:38.205 "raid_level": "raid5f", 00:14:38.205 "superblock": false, 00:14:38.206 "num_base_bdevs": 4, 00:14:38.206 "num_base_bdevs_discovered": 4, 00:14:38.206 "num_base_bdevs_operational": 4, 00:14:38.206 "process": { 00:14:38.206 "type": "rebuild", 00:14:38.206 "target": "spare", 00:14:38.206 "progress": { 00:14:38.206 "blocks": 19200, 00:14:38.206 "percent": 9 00:14:38.206 } 00:14:38.206 }, 00:14:38.206 "base_bdevs_list": [ 00:14:38.206 { 00:14:38.206 "name": "spare", 00:14:38.206 "uuid": "4435a569-4595-5fe7-958c-b4944856e5a8", 00:14:38.206 "is_configured": true, 00:14:38.206 "data_offset": 0, 00:14:38.206 "data_size": 65536 00:14:38.206 }, 00:14:38.206 { 00:14:38.206 "name": "BaseBdev2", 00:14:38.206 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:38.206 "is_configured": true, 00:14:38.206 "data_offset": 0, 00:14:38.206 "data_size": 65536 00:14:38.206 }, 00:14:38.206 { 00:14:38.206 "name": "BaseBdev3", 00:14:38.206 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:38.206 "is_configured": true, 00:14:38.206 "data_offset": 0, 00:14:38.206 "data_size": 65536 00:14:38.206 }, 
00:14:38.206 { 00:14:38.206 "name": "BaseBdev4", 00:14:38.206 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:38.206 "is_configured": true, 00:14:38.206 "data_offset": 0, 00:14:38.206 "data_size": 65536 00:14:38.206 } 00:14:38.206 ] 00:14:38.206 }' 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.206 [2024-10-15 01:15:50.784232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.206 [2024-10-15 01:15:50.827462] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:38.206 [2024-10-15 01:15:50.827586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.206 [2024-10-15 01:15:50.827608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.206 [2024-10-15 01:15:50.827624] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.206 "name": "raid_bdev1", 00:14:38.206 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:38.206 "strip_size_kb": 64, 00:14:38.206 "state": "online", 00:14:38.206 "raid_level": "raid5f", 00:14:38.206 "superblock": false, 00:14:38.206 "num_base_bdevs": 4, 00:14:38.206 "num_base_bdevs_discovered": 3, 00:14:38.206 "num_base_bdevs_operational": 3, 00:14:38.206 "base_bdevs_list": [ 00:14:38.206 { 00:14:38.206 "name": null, 00:14:38.206 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:38.206 "is_configured": false, 00:14:38.206 "data_offset": 0, 00:14:38.206 "data_size": 65536 00:14:38.206 }, 00:14:38.206 { 00:14:38.206 "name": "BaseBdev2", 00:14:38.206 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:38.206 "is_configured": true, 00:14:38.206 "data_offset": 0, 00:14:38.206 "data_size": 65536 00:14:38.206 }, 00:14:38.206 { 00:14:38.206 "name": "BaseBdev3", 00:14:38.206 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:38.206 "is_configured": true, 00:14:38.206 "data_offset": 0, 00:14:38.206 "data_size": 65536 00:14:38.206 }, 00:14:38.206 { 00:14:38.206 "name": "BaseBdev4", 00:14:38.206 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:38.206 "is_configured": true, 00:14:38.206 "data_offset": 0, 00:14:38.206 "data_size": 65536 00:14:38.206 } 00:14:38.206 ] 00:14:38.206 }' 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.206 01:15:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.776 01:15:51 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.776 "name": "raid_bdev1", 00:14:38.776 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:38.776 "strip_size_kb": 64, 00:14:38.776 "state": "online", 00:14:38.776 "raid_level": "raid5f", 00:14:38.776 "superblock": false, 00:14:38.776 "num_base_bdevs": 4, 00:14:38.776 "num_base_bdevs_discovered": 3, 00:14:38.776 "num_base_bdevs_operational": 3, 00:14:38.776 "base_bdevs_list": [ 00:14:38.776 { 00:14:38.776 "name": null, 00:14:38.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.776 "is_configured": false, 00:14:38.776 "data_offset": 0, 00:14:38.776 "data_size": 65536 00:14:38.776 }, 00:14:38.776 { 00:14:38.776 "name": "BaseBdev2", 00:14:38.776 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:38.776 "is_configured": true, 00:14:38.776 "data_offset": 0, 00:14:38.776 "data_size": 65536 00:14:38.776 }, 00:14:38.776 { 00:14:38.776 "name": "BaseBdev3", 00:14:38.776 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:38.776 "is_configured": true, 00:14:38.776 "data_offset": 0, 00:14:38.776 "data_size": 65536 00:14:38.776 }, 00:14:38.776 { 00:14:38.776 "name": "BaseBdev4", 00:14:38.776 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:38.776 "is_configured": true, 00:14:38.776 "data_offset": 0, 00:14:38.776 "data_size": 65536 00:14:38.776 } 00:14:38.776 ] 00:14:38.776 }' 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.776 [2024-10-15 01:15:51.424719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.776 [2024-10-15 01:15:51.429018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027e70 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.776 01:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:38.776 [2024-10-15 01:15:51.431386] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:39.714 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.714 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.714 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.715 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.715 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.974 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.974 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.974 01:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.974 01:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.974 01:15:52 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.974 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.974 "name": "raid_bdev1", 00:14:39.974 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:39.974 "strip_size_kb": 64, 00:14:39.974 "state": "online", 00:14:39.974 "raid_level": "raid5f", 00:14:39.974 "superblock": false, 00:14:39.974 "num_base_bdevs": 4, 00:14:39.974 "num_base_bdevs_discovered": 4, 00:14:39.974 "num_base_bdevs_operational": 4, 00:14:39.974 "process": { 00:14:39.974 "type": "rebuild", 00:14:39.974 "target": "spare", 00:14:39.974 "progress": { 00:14:39.974 "blocks": 19200, 00:14:39.974 "percent": 9 00:14:39.974 } 00:14:39.974 }, 00:14:39.974 "base_bdevs_list": [ 00:14:39.974 { 00:14:39.974 "name": "spare", 00:14:39.974 "uuid": "4435a569-4595-5fe7-958c-b4944856e5a8", 00:14:39.974 "is_configured": true, 00:14:39.974 "data_offset": 0, 00:14:39.974 "data_size": 65536 00:14:39.974 }, 00:14:39.974 { 00:14:39.974 "name": "BaseBdev2", 00:14:39.974 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:39.974 "is_configured": true, 00:14:39.974 "data_offset": 0, 00:14:39.974 "data_size": 65536 00:14:39.974 }, 00:14:39.974 { 00:14:39.974 "name": "BaseBdev3", 00:14:39.974 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:39.974 "is_configured": true, 00:14:39.974 "data_offset": 0, 00:14:39.974 "data_size": 65536 00:14:39.974 }, 00:14:39.974 { 00:14:39.974 "name": "BaseBdev4", 00:14:39.974 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:39.974 "is_configured": true, 00:14:39.975 "data_offset": 0, 00:14:39.975 "data_size": 65536 00:14:39.975 } 00:14:39.975 ] 00:14:39.975 }' 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=504 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.975 "name": "raid_bdev1", 00:14:39.975 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 
00:14:39.975 "strip_size_kb": 64, 00:14:39.975 "state": "online", 00:14:39.975 "raid_level": "raid5f", 00:14:39.975 "superblock": false, 00:14:39.975 "num_base_bdevs": 4, 00:14:39.975 "num_base_bdevs_discovered": 4, 00:14:39.975 "num_base_bdevs_operational": 4, 00:14:39.975 "process": { 00:14:39.975 "type": "rebuild", 00:14:39.975 "target": "spare", 00:14:39.975 "progress": { 00:14:39.975 "blocks": 21120, 00:14:39.975 "percent": 10 00:14:39.975 } 00:14:39.975 }, 00:14:39.975 "base_bdevs_list": [ 00:14:39.975 { 00:14:39.975 "name": "spare", 00:14:39.975 "uuid": "4435a569-4595-5fe7-958c-b4944856e5a8", 00:14:39.975 "is_configured": true, 00:14:39.975 "data_offset": 0, 00:14:39.975 "data_size": 65536 00:14:39.975 }, 00:14:39.975 { 00:14:39.975 "name": "BaseBdev2", 00:14:39.975 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:39.975 "is_configured": true, 00:14:39.975 "data_offset": 0, 00:14:39.975 "data_size": 65536 00:14:39.975 }, 00:14:39.975 { 00:14:39.975 "name": "BaseBdev3", 00:14:39.975 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:39.975 "is_configured": true, 00:14:39.975 "data_offset": 0, 00:14:39.975 "data_size": 65536 00:14:39.975 }, 00:14:39.975 { 00:14:39.975 "name": "BaseBdev4", 00:14:39.975 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:39.975 "is_configured": true, 00:14:39.975 "data_offset": 0, 00:14:39.975 "data_size": 65536 00:14:39.975 } 00:14:39.975 ] 00:14:39.975 }' 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.975 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.235 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.235 01:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.175 01:15:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.175 "name": "raid_bdev1", 00:14:41.175 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:41.175 "strip_size_kb": 64, 00:14:41.175 "state": "online", 00:14:41.175 "raid_level": "raid5f", 00:14:41.175 "superblock": false, 00:14:41.175 "num_base_bdevs": 4, 00:14:41.175 "num_base_bdevs_discovered": 4, 00:14:41.175 "num_base_bdevs_operational": 4, 00:14:41.175 "process": { 00:14:41.175 "type": "rebuild", 00:14:41.175 "target": "spare", 00:14:41.175 "progress": { 00:14:41.175 "blocks": 42240, 00:14:41.175 "percent": 21 00:14:41.175 } 00:14:41.175 }, 00:14:41.175 "base_bdevs_list": [ 00:14:41.175 { 00:14:41.175 "name": "spare", 00:14:41.175 "uuid": "4435a569-4595-5fe7-958c-b4944856e5a8", 
00:14:41.175 "is_configured": true, 00:14:41.175 "data_offset": 0, 00:14:41.175 "data_size": 65536 00:14:41.175 }, 00:14:41.175 { 00:14:41.175 "name": "BaseBdev2", 00:14:41.175 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:41.175 "is_configured": true, 00:14:41.175 "data_offset": 0, 00:14:41.175 "data_size": 65536 00:14:41.175 }, 00:14:41.175 { 00:14:41.175 "name": "BaseBdev3", 00:14:41.175 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:41.175 "is_configured": true, 00:14:41.175 "data_offset": 0, 00:14:41.175 "data_size": 65536 00:14:41.175 }, 00:14:41.175 { 00:14:41.175 "name": "BaseBdev4", 00:14:41.175 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:41.175 "is_configured": true, 00:14:41.175 "data_offset": 0, 00:14:41.175 "data_size": 65536 00:14:41.175 } 00:14:41.175 ] 00:14:41.175 }' 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.175 01:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.556 01:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.556 01:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.556 01:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.557 01:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.557 01:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.557 01:15:54 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.557 01:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.557 01:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.557 01:15:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.557 01:15:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.557 01:15:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.557 01:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.557 "name": "raid_bdev1", 00:14:42.557 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:42.557 "strip_size_kb": 64, 00:14:42.557 "state": "online", 00:14:42.557 "raid_level": "raid5f", 00:14:42.557 "superblock": false, 00:14:42.557 "num_base_bdevs": 4, 00:14:42.557 "num_base_bdevs_discovered": 4, 00:14:42.557 "num_base_bdevs_operational": 4, 00:14:42.557 "process": { 00:14:42.557 "type": "rebuild", 00:14:42.557 "target": "spare", 00:14:42.557 "progress": { 00:14:42.557 "blocks": 65280, 00:14:42.557 "percent": 33 00:14:42.557 } 00:14:42.557 }, 00:14:42.557 "base_bdevs_list": [ 00:14:42.557 { 00:14:42.557 "name": "spare", 00:14:42.557 "uuid": "4435a569-4595-5fe7-958c-b4944856e5a8", 00:14:42.557 "is_configured": true, 00:14:42.557 "data_offset": 0, 00:14:42.557 "data_size": 65536 00:14:42.557 }, 00:14:42.557 { 00:14:42.557 "name": "BaseBdev2", 00:14:42.557 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:42.557 "is_configured": true, 00:14:42.557 "data_offset": 0, 00:14:42.557 "data_size": 65536 00:14:42.557 }, 00:14:42.557 { 00:14:42.557 "name": "BaseBdev3", 00:14:42.557 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:42.557 "is_configured": true, 00:14:42.557 "data_offset": 0, 00:14:42.557 "data_size": 65536 00:14:42.557 }, 00:14:42.557 { 00:14:42.557 "name": 
"BaseBdev4", 00:14:42.557 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:42.557 "is_configured": true, 00:14:42.557 "data_offset": 0, 00:14:42.557 "data_size": 65536 00:14:42.557 } 00:14:42.557 ] 00:14:42.557 }' 00:14:42.557 01:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.557 01:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.557 01:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.557 01:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.557 01:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:43.496 01:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.496 01:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.496 01:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.496 01:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.496 01:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.496 01:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.496 01:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.496 01:15:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.496 01:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.496 01:15:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.496 01:15:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.496 01:15:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.496 "name": "raid_bdev1", 00:14:43.496 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:43.496 "strip_size_kb": 64, 00:14:43.496 "state": "online", 00:14:43.496 "raid_level": "raid5f", 00:14:43.496 "superblock": false, 00:14:43.496 "num_base_bdevs": 4, 00:14:43.496 "num_base_bdevs_discovered": 4, 00:14:43.496 "num_base_bdevs_operational": 4, 00:14:43.496 "process": { 00:14:43.496 "type": "rebuild", 00:14:43.496 "target": "spare", 00:14:43.496 "progress": { 00:14:43.496 "blocks": 86400, 00:14:43.497 "percent": 43 00:14:43.497 } 00:14:43.497 }, 00:14:43.497 "base_bdevs_list": [ 00:14:43.497 { 00:14:43.497 "name": "spare", 00:14:43.497 "uuid": "4435a569-4595-5fe7-958c-b4944856e5a8", 00:14:43.497 "is_configured": true, 00:14:43.497 "data_offset": 0, 00:14:43.497 "data_size": 65536 00:14:43.497 }, 00:14:43.497 { 00:14:43.497 "name": "BaseBdev2", 00:14:43.497 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:43.497 "is_configured": true, 00:14:43.497 "data_offset": 0, 00:14:43.497 "data_size": 65536 00:14:43.497 }, 00:14:43.497 { 00:14:43.497 "name": "BaseBdev3", 00:14:43.497 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:43.497 "is_configured": true, 00:14:43.497 "data_offset": 0, 00:14:43.497 "data_size": 65536 00:14:43.497 }, 00:14:43.497 { 00:14:43.497 "name": "BaseBdev4", 00:14:43.497 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:43.497 "is_configured": true, 00:14:43.497 "data_offset": 0, 00:14:43.497 "data_size": 65536 00:14:43.497 } 00:14:43.497 ] 00:14:43.497 }' 00:14:43.497 01:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.497 01:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.497 01:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.497 01:15:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.497 01:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:44.437 01:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.437 01:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.437 01:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.437 01:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.437 01:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.437 01:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.699 01:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.699 01:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.699 01:15:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.699 01:15:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.699 01:15:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.699 01:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.699 "name": "raid_bdev1", 00:14:44.699 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:44.699 "strip_size_kb": 64, 00:14:44.699 "state": "online", 00:14:44.699 "raid_level": "raid5f", 00:14:44.699 "superblock": false, 00:14:44.699 "num_base_bdevs": 4, 00:14:44.699 "num_base_bdevs_discovered": 4, 00:14:44.699 "num_base_bdevs_operational": 4, 00:14:44.699 "process": { 00:14:44.699 "type": "rebuild", 00:14:44.699 "target": "spare", 00:14:44.699 "progress": { 00:14:44.699 "blocks": 107520, 00:14:44.699 "percent": 54 00:14:44.699 } 
00:14:44.699 }, 00:14:44.699 "base_bdevs_list": [ 00:14:44.699 { 00:14:44.699 "name": "spare", 00:14:44.699 "uuid": "4435a569-4595-5fe7-958c-b4944856e5a8", 00:14:44.699 "is_configured": true, 00:14:44.699 "data_offset": 0, 00:14:44.699 "data_size": 65536 00:14:44.699 }, 00:14:44.699 { 00:14:44.699 "name": "BaseBdev2", 00:14:44.699 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:44.699 "is_configured": true, 00:14:44.699 "data_offset": 0, 00:14:44.699 "data_size": 65536 00:14:44.699 }, 00:14:44.699 { 00:14:44.699 "name": "BaseBdev3", 00:14:44.699 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:44.699 "is_configured": true, 00:14:44.699 "data_offset": 0, 00:14:44.699 "data_size": 65536 00:14:44.699 }, 00:14:44.699 { 00:14:44.699 "name": "BaseBdev4", 00:14:44.699 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:44.699 "is_configured": true, 00:14:44.699 "data_offset": 0, 00:14:44.699 "data_size": 65536 00:14:44.699 } 00:14:44.699 ] 00:14:44.699 }' 00:14:44.699 01:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.699 01:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.699 01:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.699 01:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.699 01:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:45.664 01:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.664 01:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.664 01:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.664 01:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.664 
01:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.664 01:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.664 01:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.664 01:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.664 01:15:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.664 01:15:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.664 01:15:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.664 01:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.664 "name": "raid_bdev1", 00:14:45.664 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:45.664 "strip_size_kb": 64, 00:14:45.664 "state": "online", 00:14:45.664 "raid_level": "raid5f", 00:14:45.664 "superblock": false, 00:14:45.664 "num_base_bdevs": 4, 00:14:45.664 "num_base_bdevs_discovered": 4, 00:14:45.664 "num_base_bdevs_operational": 4, 00:14:45.664 "process": { 00:14:45.664 "type": "rebuild", 00:14:45.664 "target": "spare", 00:14:45.664 "progress": { 00:14:45.664 "blocks": 130560, 00:14:45.664 "percent": 66 00:14:45.664 } 00:14:45.664 }, 00:14:45.664 "base_bdevs_list": [ 00:14:45.664 { 00:14:45.664 "name": "spare", 00:14:45.664 "uuid": "4435a569-4595-5fe7-958c-b4944856e5a8", 00:14:45.664 "is_configured": true, 00:14:45.664 "data_offset": 0, 00:14:45.664 "data_size": 65536 00:14:45.664 }, 00:14:45.664 { 00:14:45.664 "name": "BaseBdev2", 00:14:45.664 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:45.664 "is_configured": true, 00:14:45.664 "data_offset": 0, 00:14:45.664 "data_size": 65536 00:14:45.664 }, 00:14:45.664 { 00:14:45.664 "name": "BaseBdev3", 00:14:45.664 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 
00:14:45.664 "is_configured": true, 00:14:45.664 "data_offset": 0, 00:14:45.664 "data_size": 65536 00:14:45.664 }, 00:14:45.664 { 00:14:45.664 "name": "BaseBdev4", 00:14:45.664 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:45.664 "is_configured": true, 00:14:45.664 "data_offset": 0, 00:14:45.664 "data_size": 65536 00:14:45.664 } 00:14:45.664 ] 00:14:45.664 }' 00:14:45.664 01:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.924 01:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.924 01:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.924 01:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.924 01:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:46.864 01:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.864 01:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.864 01:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.864 01:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.864 01:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.864 01:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.864 01:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.864 01:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.864 01:15:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.864 01:15:59 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:46.864 01:15:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.864 01:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.864 "name": "raid_bdev1", 00:14:46.864 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:46.864 "strip_size_kb": 64, 00:14:46.864 "state": "online", 00:14:46.864 "raid_level": "raid5f", 00:14:46.864 "superblock": false, 00:14:46.864 "num_base_bdevs": 4, 00:14:46.864 "num_base_bdevs_discovered": 4, 00:14:46.864 "num_base_bdevs_operational": 4, 00:14:46.864 "process": { 00:14:46.864 "type": "rebuild", 00:14:46.864 "target": "spare", 00:14:46.864 "progress": { 00:14:46.864 "blocks": 153600, 00:14:46.864 "percent": 78 00:14:46.864 } 00:14:46.864 }, 00:14:46.864 "base_bdevs_list": [ 00:14:46.864 { 00:14:46.864 "name": "spare", 00:14:46.864 "uuid": "4435a569-4595-5fe7-958c-b4944856e5a8", 00:14:46.864 "is_configured": true, 00:14:46.864 "data_offset": 0, 00:14:46.864 "data_size": 65536 00:14:46.864 }, 00:14:46.864 { 00:14:46.864 "name": "BaseBdev2", 00:14:46.864 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:46.864 "is_configured": true, 00:14:46.864 "data_offset": 0, 00:14:46.864 "data_size": 65536 00:14:46.864 }, 00:14:46.864 { 00:14:46.864 "name": "BaseBdev3", 00:14:46.865 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:46.865 "is_configured": true, 00:14:46.865 "data_offset": 0, 00:14:46.865 "data_size": 65536 00:14:46.865 }, 00:14:46.865 { 00:14:46.865 "name": "BaseBdev4", 00:14:46.865 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:46.865 "is_configured": true, 00:14:46.865 "data_offset": 0, 00:14:46.865 "data_size": 65536 00:14:46.865 } 00:14:46.865 ] 00:14:46.865 }' 00:14:46.865 01:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.125 01:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:14:47.125 01:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.125 01:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.125 01:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:48.065 01:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:48.065 01:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.065 01:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.065 01:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.065 01:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.065 01:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.065 01:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.065 01:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.065 01:16:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.065 01:16:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.065 01:16:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.065 01:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.065 "name": "raid_bdev1", 00:14:48.065 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:48.065 "strip_size_kb": 64, 00:14:48.065 "state": "online", 00:14:48.065 "raid_level": "raid5f", 00:14:48.065 "superblock": false, 00:14:48.065 "num_base_bdevs": 4, 00:14:48.065 "num_base_bdevs_discovered": 4, 00:14:48.065 "num_base_bdevs_operational": 4, 00:14:48.065 
"process": { 00:14:48.065 "type": "rebuild", 00:14:48.065 "target": "spare", 00:14:48.065 "progress": { 00:14:48.065 "blocks": 174720, 00:14:48.065 "percent": 88 00:14:48.065 } 00:14:48.065 }, 00:14:48.065 "base_bdevs_list": [ 00:14:48.065 { 00:14:48.065 "name": "spare", 00:14:48.065 "uuid": "4435a569-4595-5fe7-958c-b4944856e5a8", 00:14:48.065 "is_configured": true, 00:14:48.065 "data_offset": 0, 00:14:48.065 "data_size": 65536 00:14:48.065 }, 00:14:48.065 { 00:14:48.065 "name": "BaseBdev2", 00:14:48.065 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:48.065 "is_configured": true, 00:14:48.065 "data_offset": 0, 00:14:48.065 "data_size": 65536 00:14:48.065 }, 00:14:48.065 { 00:14:48.065 "name": "BaseBdev3", 00:14:48.065 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:48.065 "is_configured": true, 00:14:48.065 "data_offset": 0, 00:14:48.065 "data_size": 65536 00:14:48.065 }, 00:14:48.065 { 00:14:48.065 "name": "BaseBdev4", 00:14:48.065 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:48.065 "is_configured": true, 00:14:48.065 "data_offset": 0, 00:14:48.065 "data_size": 65536 00:14:48.065 } 00:14:48.065 ] 00:14:48.065 }' 00:14:48.065 01:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.065 01:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.065 01:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.325 01:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.325 01:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.265 [2024-10-15 01:16:01.799021] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:49.265 [2024-10-15 01:16:01.799233] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:49.265 [2024-10-15 
01:16:01.799284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.265 "name": "raid_bdev1", 00:14:49.265 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:49.265 "strip_size_kb": 64, 00:14:49.265 "state": "online", 00:14:49.265 "raid_level": "raid5f", 00:14:49.265 "superblock": false, 00:14:49.265 "num_base_bdevs": 4, 00:14:49.265 "num_base_bdevs_discovered": 4, 00:14:49.265 "num_base_bdevs_operational": 4, 00:14:49.265 "base_bdevs_list": [ 00:14:49.265 { 00:14:49.265 "name": "spare", 00:14:49.265 "uuid": "4435a569-4595-5fe7-958c-b4944856e5a8", 00:14:49.265 "is_configured": true, 00:14:49.265 "data_offset": 0, 00:14:49.265 "data_size": 65536 
00:14:49.265 }, 00:14:49.265 { 00:14:49.265 "name": "BaseBdev2", 00:14:49.265 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:49.265 "is_configured": true, 00:14:49.265 "data_offset": 0, 00:14:49.265 "data_size": 65536 00:14:49.265 }, 00:14:49.265 { 00:14:49.265 "name": "BaseBdev3", 00:14:49.265 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:49.265 "is_configured": true, 00:14:49.265 "data_offset": 0, 00:14:49.265 "data_size": 65536 00:14:49.265 }, 00:14:49.265 { 00:14:49.265 "name": "BaseBdev4", 00:14:49.265 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:49.265 "is_configured": true, 00:14:49.265 "data_offset": 0, 00:14:49.265 "data_size": 65536 00:14:49.265 } 00:14:49.265 ] 00:14:49.265 }' 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.265 01:16:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.525 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.525 "name": "raid_bdev1", 00:14:49.525 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:49.525 "strip_size_kb": 64, 00:14:49.525 "state": "online", 00:14:49.525 "raid_level": "raid5f", 00:14:49.525 "superblock": false, 00:14:49.525 "num_base_bdevs": 4, 00:14:49.525 "num_base_bdevs_discovered": 4, 00:14:49.525 "num_base_bdevs_operational": 4, 00:14:49.525 "base_bdevs_list": [ 00:14:49.525 { 00:14:49.525 "name": "spare", 00:14:49.525 "uuid": "4435a569-4595-5fe7-958c-b4944856e5a8", 00:14:49.525 "is_configured": true, 00:14:49.525 "data_offset": 0, 00:14:49.525 "data_size": 65536 00:14:49.525 }, 00:14:49.525 { 00:14:49.525 "name": "BaseBdev2", 00:14:49.525 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:49.525 "is_configured": true, 00:14:49.525 "data_offset": 0, 00:14:49.525 "data_size": 65536 00:14:49.525 }, 00:14:49.525 { 00:14:49.525 "name": "BaseBdev3", 00:14:49.525 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:49.525 "is_configured": true, 00:14:49.525 "data_offset": 0, 00:14:49.525 "data_size": 65536 00:14:49.525 }, 00:14:49.525 { 00:14:49.525 "name": "BaseBdev4", 00:14:49.525 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:49.525 "is_configured": true, 00:14:49.525 "data_offset": 0, 00:14:49.525 "data_size": 65536 00:14:49.525 } 00:14:49.525 ] 00:14:49.525 }' 00:14:49.525 01:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.525 "name": "raid_bdev1", 
00:14:49.525 "uuid": "40556177-b9d4-40a5-8ad2-48e7a5c50ed4", 00:14:49.525 "strip_size_kb": 64, 00:14:49.525 "state": "online", 00:14:49.525 "raid_level": "raid5f", 00:14:49.525 "superblock": false, 00:14:49.525 "num_base_bdevs": 4, 00:14:49.525 "num_base_bdevs_discovered": 4, 00:14:49.525 "num_base_bdevs_operational": 4, 00:14:49.525 "base_bdevs_list": [ 00:14:49.525 { 00:14:49.525 "name": "spare", 00:14:49.525 "uuid": "4435a569-4595-5fe7-958c-b4944856e5a8", 00:14:49.525 "is_configured": true, 00:14:49.525 "data_offset": 0, 00:14:49.525 "data_size": 65536 00:14:49.525 }, 00:14:49.525 { 00:14:49.525 "name": "BaseBdev2", 00:14:49.525 "uuid": "960efc4f-015c-5756-a704-0651d009e75f", 00:14:49.525 "is_configured": true, 00:14:49.525 "data_offset": 0, 00:14:49.525 "data_size": 65536 00:14:49.525 }, 00:14:49.525 { 00:14:49.525 "name": "BaseBdev3", 00:14:49.525 "uuid": "68b23c5e-7c2a-51d4-977c-359178ac03f7", 00:14:49.525 "is_configured": true, 00:14:49.525 "data_offset": 0, 00:14:49.525 "data_size": 65536 00:14:49.525 }, 00:14:49.525 { 00:14:49.525 "name": "BaseBdev4", 00:14:49.525 "uuid": "c62147a1-810d-53ff-9f1b-89ee77103fbd", 00:14:49.525 "is_configured": true, 00:14:49.525 "data_offset": 0, 00:14:49.525 "data_size": 65536 00:14:49.525 } 00:14:49.525 ] 00:14:49.525 }' 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.525 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.096 [2024-10-15 01:16:02.535827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:50.096 [2024-10-15 01:16:02.535938] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:14:50.096 [2024-10-15 01:16:02.536035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.096 [2024-10-15 01:16:02.536138] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.096 [2024-10-15 01:16:02.536162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:50.096 /dev/nbd0 00:14:50.096 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.356 1+0 records in 00:14:50.356 1+0 records out 00:14:50.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048543 s, 8.4 MB/s 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:50.356 01:16:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:50.356 /dev/nbd1 00:14:50.356 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:50.616 01:16:03 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.616 1+0 records in 00:14:50.616 1+0 records out 00:14:50.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618864 s, 6.6 MB/s 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:50.616 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:50.876 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:50.876 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:50.876 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:50.876 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:50.876 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.876 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:50.876 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:50.877 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:50.877 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:50.877 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 94754 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 94754 ']' 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 94754 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94754 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94754' 00:14:51.137 killing process with pid 94754 00:14:51.137 Received shutdown signal, test time was about 60.000000 seconds 00:14:51.137 00:14:51.137 Latency(us) 00:14:51.137 [2024-10-15T01:16:03.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.137 [2024-10-15T01:16:03.861Z] =================================================================================================================== 00:14:51.137 [2024-10-15T01:16:03.861Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 94754 00:14:51.137 [2024-10-15 01:16:03.688602] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.137 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 94754 00:14:51.137 [2024-10-15 01:16:03.740119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.397 01:16:03 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@786 -- # return 0 00:14:51.397 00:14:51.397 real 0m17.343s 00:14:51.397 user 0m21.250s 00:14:51.397 sys 0m2.232s 00:14:51.397 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:51.397 01:16:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.397 ************************************ 00:14:51.397 END TEST raid5f_rebuild_test 00:14:51.397 ************************************ 00:14:51.397 01:16:03 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:14:51.397 01:16:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:51.397 01:16:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:51.397 01:16:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:51.397 ************************************ 00:14:51.397 START TEST raid5f_rebuild_test_sb 00:14:51.397 ************************************ 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:51.397 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f 
'!=' raid1 ']' 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95237 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95237 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95237 ']' 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:51.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:51.398 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.398 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:51.398 Zero copy mechanism will not be used. 
00:14:51.398 [2024-10-15 01:16:04.109558] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:14:51.398 [2024-10-15 01:16:04.109678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95237 ] 00:14:51.658 [2024-10-15 01:16:04.254401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.658 [2024-10-15 01:16:04.283337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.658 [2024-10-15 01:16:04.325707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.658 [2024-10-15 01:16:04.325746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.228 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:52.228 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:52.228 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:52.228 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:52.228 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.228 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.490 BaseBdev1_malloc 00:14:52.490 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.491 [2024-10-15 01:16:04.960493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:52.491 [2024-10-15 01:16:04.960583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.491 [2024-10-15 01:16:04.960609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:52.491 [2024-10-15 01:16:04.960621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.491 [2024-10-15 01:16:04.962793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.491 [2024-10-15 01:16:04.962892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:52.491 BaseBdev1 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.491 BaseBdev2_malloc 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.491 [2024-10-15 01:16:04.989206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:52.491 
[2024-10-15 01:16:04.989268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.491 [2024-10-15 01:16:04.989287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:52.491 [2024-10-15 01:16:04.989296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.491 [2024-10-15 01:16:04.991382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.491 [2024-10-15 01:16:04.991425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:52.491 BaseBdev2 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.491 01:16:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.491 BaseBdev3_malloc 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.491 [2024-10-15 01:16:05.017944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:52.491 [2024-10-15 01:16:05.018012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.491 [2024-10-15 01:16:05.018036] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:52.491 [2024-10-15 01:16:05.018045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.491 [2024-10-15 01:16:05.020205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.491 [2024-10-15 01:16:05.020239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:52.491 BaseBdev3 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.491 BaseBdev4_malloc 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.491 [2024-10-15 01:16:05.056545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:52.491 [2024-10-15 01:16:05.056694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.491 [2024-10-15 01:16:05.056730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:52.491 [2024-10-15 01:16:05.056741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:14:52.491 [2024-10-15 01:16:05.059020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.491 [2024-10-15 01:16:05.059056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:52.491 BaseBdev4 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.491 spare_malloc 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.491 spare_delay 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.491 [2024-10-15 01:16:05.097301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:52.491 [2024-10-15 01:16:05.097363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.491 [2024-10-15 01:16:05.097387] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:52.491 [2024-10-15 01:16:05.097396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.491 [2024-10-15 01:16:05.099453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.491 [2024-10-15 01:16:05.099563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:52.491 spare 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.491 [2024-10-15 01:16:05.109363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.491 [2024-10-15 01:16:05.111199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.491 [2024-10-15 01:16:05.111260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.491 [2024-10-15 01:16:05.111308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:52.491 [2024-10-15 01:16:05.111486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:52.491 [2024-10-15 01:16:05.111497] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:52.491 [2024-10-15 01:16:05.111761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:52.491 [2024-10-15 01:16:05.112248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:52.491 
[2024-10-15 01:16:05.112263] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:52.491 [2024-10-15 01:16:05.112381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.491 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.491 "name": "raid_bdev1", 00:14:52.491 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:14:52.491 "strip_size_kb": 64, 00:14:52.491 "state": "online", 00:14:52.491 "raid_level": "raid5f", 00:14:52.491 "superblock": true, 00:14:52.491 "num_base_bdevs": 4, 00:14:52.491 "num_base_bdevs_discovered": 4, 00:14:52.491 "num_base_bdevs_operational": 4, 00:14:52.491 "base_bdevs_list": [ 00:14:52.491 { 00:14:52.491 "name": "BaseBdev1", 00:14:52.491 "uuid": "b2499151-b7e1-53df-80ff-2b15b0eecea3", 00:14:52.491 "is_configured": true, 00:14:52.491 "data_offset": 2048, 00:14:52.491 "data_size": 63488 00:14:52.491 }, 00:14:52.492 { 00:14:52.492 "name": "BaseBdev2", 00:14:52.492 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:14:52.492 "is_configured": true, 00:14:52.492 "data_offset": 2048, 00:14:52.492 "data_size": 63488 00:14:52.492 }, 00:14:52.492 { 00:14:52.492 "name": "BaseBdev3", 00:14:52.492 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:14:52.492 "is_configured": true, 00:14:52.492 "data_offset": 2048, 00:14:52.492 "data_size": 63488 00:14:52.492 }, 00:14:52.492 { 00:14:52.492 "name": "BaseBdev4", 00:14:52.492 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:14:52.492 "is_configured": true, 00:14:52.492 "data_offset": 2048, 00:14:52.492 "data_size": 63488 00:14:52.492 } 00:14:52.492 ] 00:14:52.492 }' 00:14:52.492 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.492 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:53.062 [2024-10-15 01:16:05.565561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.062 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:53.322 [2024-10-15 01:16:05.848920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:53.322 /dev/nbd0 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:14:53.322 1+0 records in 00:14:53.322 1+0 records out 00:14:53.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511712 s, 8.0 MB/s 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:53.322 01:16:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:14:53.892 496+0 records in 00:14:53.892 496+0 records out 00:14:53.892 97517568 bytes (98 MB, 93 MiB) copied, 0.401397 s, 243 MB/s 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- 
# local nbd_list 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:53.892 [2024-10-15 01:16:06.541586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.892 [2024-10-15 01:16:06.557652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.892 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.152 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.152 "name": "raid_bdev1", 00:14:54.152 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:14:54.152 "strip_size_kb": 64, 00:14:54.152 "state": "online", 00:14:54.152 "raid_level": "raid5f", 00:14:54.152 "superblock": true, 00:14:54.152 "num_base_bdevs": 4, 00:14:54.152 "num_base_bdevs_discovered": 3, 00:14:54.152 
"num_base_bdevs_operational": 3, 00:14:54.152 "base_bdevs_list": [ 00:14:54.152 { 00:14:54.152 "name": null, 00:14:54.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.152 "is_configured": false, 00:14:54.152 "data_offset": 0, 00:14:54.152 "data_size": 63488 00:14:54.152 }, 00:14:54.152 { 00:14:54.152 "name": "BaseBdev2", 00:14:54.152 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:14:54.152 "is_configured": true, 00:14:54.152 "data_offset": 2048, 00:14:54.152 "data_size": 63488 00:14:54.152 }, 00:14:54.152 { 00:14:54.152 "name": "BaseBdev3", 00:14:54.152 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:14:54.152 "is_configured": true, 00:14:54.152 "data_offset": 2048, 00:14:54.152 "data_size": 63488 00:14:54.152 }, 00:14:54.152 { 00:14:54.152 "name": "BaseBdev4", 00:14:54.152 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:14:54.152 "is_configured": true, 00:14:54.152 "data_offset": 2048, 00:14:54.152 "data_size": 63488 00:14:54.152 } 00:14:54.152 ] 00:14:54.152 }' 00:14:54.152 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.152 01:16:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.412 01:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:54.412 01:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.412 01:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.412 [2024-10-15 01:16:07.036880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:54.412 [2024-10-15 01:16:07.041390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:14:54.412 01:16:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.412 01:16:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:54.412 
[2024-10-15 01:16:07.043735] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:55.352 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.352 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.352 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.352 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.352 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.352 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.352 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.352 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.352 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.352 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.612 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.612 "name": "raid_bdev1", 00:14:55.612 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:14:55.612 "strip_size_kb": 64, 00:14:55.612 "state": "online", 00:14:55.612 "raid_level": "raid5f", 00:14:55.612 "superblock": true, 00:14:55.612 "num_base_bdevs": 4, 00:14:55.612 "num_base_bdevs_discovered": 4, 00:14:55.612 "num_base_bdevs_operational": 4, 00:14:55.612 "process": { 00:14:55.612 "type": "rebuild", 00:14:55.612 "target": "spare", 00:14:55.612 "progress": { 00:14:55.612 "blocks": 19200, 00:14:55.612 "percent": 10 00:14:55.612 } 00:14:55.612 }, 00:14:55.612 "base_bdevs_list": [ 00:14:55.612 { 00:14:55.612 "name": 
"spare", 00:14:55.612 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:14:55.612 "is_configured": true, 00:14:55.612 "data_offset": 2048, 00:14:55.612 "data_size": 63488 00:14:55.612 }, 00:14:55.612 { 00:14:55.613 "name": "BaseBdev2", 00:14:55.613 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:14:55.613 "is_configured": true, 00:14:55.613 "data_offset": 2048, 00:14:55.613 "data_size": 63488 00:14:55.613 }, 00:14:55.613 { 00:14:55.613 "name": "BaseBdev3", 00:14:55.613 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:14:55.613 "is_configured": true, 00:14:55.613 "data_offset": 2048, 00:14:55.613 "data_size": 63488 00:14:55.613 }, 00:14:55.613 { 00:14:55.613 "name": "BaseBdev4", 00:14:55.613 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:14:55.613 "is_configured": true, 00:14:55.613 "data_offset": 2048, 00:14:55.613 "data_size": 63488 00:14:55.613 } 00:14:55.613 ] 00:14:55.613 }' 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.613 [2024-10-15 01:16:08.204268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:55.613 [2024-10-15 01:16:08.252174] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:55.613 [2024-10-15 
01:16:08.252347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.613 [2024-10-15 01:16:08.252388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:55.613 [2024-10-15 01:16:08.252425] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.613 "name": "raid_bdev1", 00:14:55.613 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:14:55.613 "strip_size_kb": 64, 00:14:55.613 "state": "online", 00:14:55.613 "raid_level": "raid5f", 00:14:55.613 "superblock": true, 00:14:55.613 "num_base_bdevs": 4, 00:14:55.613 "num_base_bdevs_discovered": 3, 00:14:55.613 "num_base_bdevs_operational": 3, 00:14:55.613 "base_bdevs_list": [ 00:14:55.613 { 00:14:55.613 "name": null, 00:14:55.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.613 "is_configured": false, 00:14:55.613 "data_offset": 0, 00:14:55.613 "data_size": 63488 00:14:55.613 }, 00:14:55.613 { 00:14:55.613 "name": "BaseBdev2", 00:14:55.613 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:14:55.613 "is_configured": true, 00:14:55.613 "data_offset": 2048, 00:14:55.613 "data_size": 63488 00:14:55.613 }, 00:14:55.613 { 00:14:55.613 "name": "BaseBdev3", 00:14:55.613 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:14:55.613 "is_configured": true, 00:14:55.613 "data_offset": 2048, 00:14:55.613 "data_size": 63488 00:14:55.613 }, 00:14:55.613 { 00:14:55.613 "name": "BaseBdev4", 00:14:55.613 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:14:55.613 "is_configured": true, 00:14:55.613 "data_offset": 2048, 00:14:55.613 "data_size": 63488 00:14:55.613 } 00:14:55.613 ] 00:14:55.613 }' 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.613 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.183 "name": "raid_bdev1", 00:14:56.183 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:14:56.183 "strip_size_kb": 64, 00:14:56.183 "state": "online", 00:14:56.183 "raid_level": "raid5f", 00:14:56.183 "superblock": true, 00:14:56.183 "num_base_bdevs": 4, 00:14:56.183 "num_base_bdevs_discovered": 3, 00:14:56.183 "num_base_bdevs_operational": 3, 00:14:56.183 "base_bdevs_list": [ 00:14:56.183 { 00:14:56.183 "name": null, 00:14:56.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.183 "is_configured": false, 00:14:56.183 "data_offset": 0, 00:14:56.183 "data_size": 63488 00:14:56.183 }, 00:14:56.183 { 00:14:56.183 "name": "BaseBdev2", 00:14:56.183 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:14:56.183 "is_configured": true, 00:14:56.183 "data_offset": 2048, 00:14:56.183 "data_size": 63488 00:14:56.183 }, 00:14:56.183 { 00:14:56.183 "name": "BaseBdev3", 00:14:56.183 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:14:56.183 "is_configured": true, 
00:14:56.183 "data_offset": 2048, 00:14:56.183 "data_size": 63488 00:14:56.183 }, 00:14:56.183 { 00:14:56.183 "name": "BaseBdev4", 00:14:56.183 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:14:56.183 "is_configured": true, 00:14:56.183 "data_offset": 2048, 00:14:56.183 "data_size": 63488 00:14:56.183 } 00:14:56.183 ] 00:14:56.183 }' 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.183 [2024-10-15 01:16:08.869392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:56.183 [2024-10-15 01:16:08.873668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027170 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.183 01:16:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:56.183 [2024-10-15 01:16:08.875898] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:57.565 01:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.565 01:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.565 01:16:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.565 01:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.565 01:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.565 01:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.565 01:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.565 01:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.565 01:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.565 01:16:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.565 01:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.565 "name": "raid_bdev1", 00:14:57.565 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:14:57.565 "strip_size_kb": 64, 00:14:57.565 "state": "online", 00:14:57.565 "raid_level": "raid5f", 00:14:57.565 "superblock": true, 00:14:57.565 "num_base_bdevs": 4, 00:14:57.565 "num_base_bdevs_discovered": 4, 00:14:57.565 "num_base_bdevs_operational": 4, 00:14:57.565 "process": { 00:14:57.565 "type": "rebuild", 00:14:57.565 "target": "spare", 00:14:57.565 "progress": { 00:14:57.565 "blocks": 19200, 00:14:57.565 "percent": 10 00:14:57.565 } 00:14:57.565 }, 00:14:57.565 "base_bdevs_list": [ 00:14:57.565 { 00:14:57.565 "name": "spare", 00:14:57.565 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:14:57.565 "is_configured": true, 00:14:57.565 "data_offset": 2048, 00:14:57.566 "data_size": 63488 00:14:57.566 }, 00:14:57.566 { 00:14:57.566 "name": "BaseBdev2", 00:14:57.566 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:14:57.566 "is_configured": true, 00:14:57.566 "data_offset": 2048, 00:14:57.566 "data_size": 63488 
00:14:57.566 }, 00:14:57.566 { 00:14:57.566 "name": "BaseBdev3", 00:14:57.566 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:14:57.566 "is_configured": true, 00:14:57.566 "data_offset": 2048, 00:14:57.566 "data_size": 63488 00:14:57.566 }, 00:14:57.566 { 00:14:57.566 "name": "BaseBdev4", 00:14:57.566 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:14:57.566 "is_configured": true, 00:14:57.566 "data_offset": 2048, 00:14:57.566 "data_size": 63488 00:14:57.566 } 00:14:57.566 ] 00:14:57.566 }' 00:14:57.566 01:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.566 01:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.566 01:16:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:57.566 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=522 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.566 01:16:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.566 "name": "raid_bdev1", 00:14:57.566 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:14:57.566 "strip_size_kb": 64, 00:14:57.566 "state": "online", 00:14:57.566 "raid_level": "raid5f", 00:14:57.566 "superblock": true, 00:14:57.566 "num_base_bdevs": 4, 00:14:57.566 "num_base_bdevs_discovered": 4, 00:14:57.566 "num_base_bdevs_operational": 4, 00:14:57.566 "process": { 00:14:57.566 "type": "rebuild", 00:14:57.566 "target": "spare", 00:14:57.566 "progress": { 00:14:57.566 "blocks": 21120, 00:14:57.566 "percent": 11 00:14:57.566 } 00:14:57.566 }, 00:14:57.566 "base_bdevs_list": [ 00:14:57.566 { 00:14:57.566 "name": "spare", 00:14:57.566 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:14:57.566 "is_configured": true, 00:14:57.566 "data_offset": 2048, 00:14:57.566 "data_size": 63488 00:14:57.566 }, 00:14:57.566 { 00:14:57.566 "name": "BaseBdev2", 00:14:57.566 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:14:57.566 "is_configured": true, 00:14:57.566 "data_offset": 2048, 00:14:57.566 "data_size": 63488 
00:14:57.566 }, 00:14:57.566 { 00:14:57.566 "name": "BaseBdev3", 00:14:57.566 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:14:57.566 "is_configured": true, 00:14:57.566 "data_offset": 2048, 00:14:57.566 "data_size": 63488 00:14:57.566 }, 00:14:57.566 { 00:14:57.566 "name": "BaseBdev4", 00:14:57.566 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:14:57.566 "is_configured": true, 00:14:57.566 "data_offset": 2048, 00:14:57.566 "data_size": 63488 00:14:57.566 } 00:14:57.566 ] 00:14:57.566 }' 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.566 01:16:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.506 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:58.506 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.506 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.506 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.506 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.506 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.506 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.506 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:58.506 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.506 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.506 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.766 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.766 "name": "raid_bdev1", 00:14:58.766 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:14:58.766 "strip_size_kb": 64, 00:14:58.766 "state": "online", 00:14:58.766 "raid_level": "raid5f", 00:14:58.766 "superblock": true, 00:14:58.766 "num_base_bdevs": 4, 00:14:58.766 "num_base_bdevs_discovered": 4, 00:14:58.766 "num_base_bdevs_operational": 4, 00:14:58.766 "process": { 00:14:58.766 "type": "rebuild", 00:14:58.766 "target": "spare", 00:14:58.766 "progress": { 00:14:58.766 "blocks": 44160, 00:14:58.766 "percent": 23 00:14:58.766 } 00:14:58.766 }, 00:14:58.766 "base_bdevs_list": [ 00:14:58.766 { 00:14:58.766 "name": "spare", 00:14:58.766 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:14:58.766 "is_configured": true, 00:14:58.766 "data_offset": 2048, 00:14:58.766 "data_size": 63488 00:14:58.766 }, 00:14:58.766 { 00:14:58.766 "name": "BaseBdev2", 00:14:58.766 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:14:58.766 "is_configured": true, 00:14:58.766 "data_offset": 2048, 00:14:58.766 "data_size": 63488 00:14:58.766 }, 00:14:58.766 { 00:14:58.766 "name": "BaseBdev3", 00:14:58.766 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:14:58.766 "is_configured": true, 00:14:58.766 "data_offset": 2048, 00:14:58.766 "data_size": 63488 00:14:58.766 }, 00:14:58.766 { 00:14:58.766 "name": "BaseBdev4", 00:14:58.766 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:14:58.766 "is_configured": true, 00:14:58.766 "data_offset": 2048, 00:14:58.766 "data_size": 63488 00:14:58.766 } 00:14:58.766 ] 00:14:58.766 }' 00:14:58.766 01:16:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.766 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.766 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.766 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.766 01:16:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:59.705 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:59.705 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.705 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.705 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.705 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.705 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.705 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.705 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.705 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.705 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.705 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.705 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.705 "name": "raid_bdev1", 00:14:59.705 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:14:59.705 
"strip_size_kb": 64, 00:14:59.705 "state": "online", 00:14:59.705 "raid_level": "raid5f", 00:14:59.705 "superblock": true, 00:14:59.705 "num_base_bdevs": 4, 00:14:59.705 "num_base_bdevs_discovered": 4, 00:14:59.705 "num_base_bdevs_operational": 4, 00:14:59.705 "process": { 00:14:59.705 "type": "rebuild", 00:14:59.705 "target": "spare", 00:14:59.705 "progress": { 00:14:59.705 "blocks": 65280, 00:14:59.705 "percent": 34 00:14:59.705 } 00:14:59.705 }, 00:14:59.705 "base_bdevs_list": [ 00:14:59.705 { 00:14:59.705 "name": "spare", 00:14:59.705 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:14:59.705 "is_configured": true, 00:14:59.705 "data_offset": 2048, 00:14:59.705 "data_size": 63488 00:14:59.705 }, 00:14:59.705 { 00:14:59.705 "name": "BaseBdev2", 00:14:59.705 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:14:59.705 "is_configured": true, 00:14:59.705 "data_offset": 2048, 00:14:59.705 "data_size": 63488 00:14:59.706 }, 00:14:59.706 { 00:14:59.706 "name": "BaseBdev3", 00:14:59.706 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:14:59.706 "is_configured": true, 00:14:59.706 "data_offset": 2048, 00:14:59.706 "data_size": 63488 00:14:59.706 }, 00:14:59.706 { 00:14:59.706 "name": "BaseBdev4", 00:14:59.706 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:14:59.706 "is_configured": true, 00:14:59.706 "data_offset": 2048, 00:14:59.706 "data_size": 63488 00:14:59.706 } 00:14:59.706 ] 00:14:59.706 }' 00:14:59.706 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.965 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.965 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.965 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.965 01:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:00.905 
01:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.905 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.905 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.905 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.905 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.905 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.905 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.905 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.905 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.905 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.905 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.905 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.905 "name": "raid_bdev1", 00:15:00.905 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:00.905 "strip_size_kb": 64, 00:15:00.905 "state": "online", 00:15:00.905 "raid_level": "raid5f", 00:15:00.905 "superblock": true, 00:15:00.905 "num_base_bdevs": 4, 00:15:00.905 "num_base_bdevs_discovered": 4, 00:15:00.905 "num_base_bdevs_operational": 4, 00:15:00.905 "process": { 00:15:00.905 "type": "rebuild", 00:15:00.905 "target": "spare", 00:15:00.905 "progress": { 00:15:00.905 "blocks": 86400, 00:15:00.905 "percent": 45 00:15:00.905 } 00:15:00.905 }, 00:15:00.905 "base_bdevs_list": [ 00:15:00.905 { 00:15:00.905 "name": "spare", 00:15:00.905 "uuid": 
"edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:15:00.905 "is_configured": true, 00:15:00.905 "data_offset": 2048, 00:15:00.905 "data_size": 63488 00:15:00.905 }, 00:15:00.905 { 00:15:00.905 "name": "BaseBdev2", 00:15:00.905 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:00.905 "is_configured": true, 00:15:00.905 "data_offset": 2048, 00:15:00.905 "data_size": 63488 00:15:00.905 }, 00:15:00.905 { 00:15:00.905 "name": "BaseBdev3", 00:15:00.905 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:00.905 "is_configured": true, 00:15:00.905 "data_offset": 2048, 00:15:00.905 "data_size": 63488 00:15:00.905 }, 00:15:00.905 { 00:15:00.905 "name": "BaseBdev4", 00:15:00.905 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:00.905 "is_configured": true, 00:15:00.905 "data_offset": 2048, 00:15:00.905 "data_size": 63488 00:15:00.905 } 00:15:00.905 ] 00:15:00.905 }' 00:15:00.905 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.905 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.905 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.165 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.165 01:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:02.105 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.105 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.105 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.105 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.105 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:02.105 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.106 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.106 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.106 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.106 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.106 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.106 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.106 "name": "raid_bdev1", 00:15:02.106 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:02.106 "strip_size_kb": 64, 00:15:02.106 "state": "online", 00:15:02.106 "raid_level": "raid5f", 00:15:02.106 "superblock": true, 00:15:02.106 "num_base_bdevs": 4, 00:15:02.106 "num_base_bdevs_discovered": 4, 00:15:02.106 "num_base_bdevs_operational": 4, 00:15:02.106 "process": { 00:15:02.106 "type": "rebuild", 00:15:02.106 "target": "spare", 00:15:02.106 "progress": { 00:15:02.106 "blocks": 109440, 00:15:02.106 "percent": 57 00:15:02.106 } 00:15:02.106 }, 00:15:02.106 "base_bdevs_list": [ 00:15:02.106 { 00:15:02.106 "name": "spare", 00:15:02.106 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:15:02.106 "is_configured": true, 00:15:02.106 "data_offset": 2048, 00:15:02.106 "data_size": 63488 00:15:02.106 }, 00:15:02.106 { 00:15:02.106 "name": "BaseBdev2", 00:15:02.106 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:02.106 "is_configured": true, 00:15:02.106 "data_offset": 2048, 00:15:02.106 "data_size": 63488 00:15:02.106 }, 00:15:02.106 { 00:15:02.106 "name": "BaseBdev3", 00:15:02.106 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:02.106 "is_configured": true, 00:15:02.106 
"data_offset": 2048, 00:15:02.106 "data_size": 63488 00:15:02.106 }, 00:15:02.106 { 00:15:02.106 "name": "BaseBdev4", 00:15:02.106 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:02.106 "is_configured": true, 00:15:02.106 "data_offset": 2048, 00:15:02.106 "data_size": 63488 00:15:02.106 } 00:15:02.106 ] 00:15:02.106 }' 00:15:02.106 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.106 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.106 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.106 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.106 01:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.102 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.102 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.102 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.102 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.102 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.102 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.102 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.102 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.102 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.102 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.102 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.362 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.362 "name": "raid_bdev1", 00:15:03.362 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:03.362 "strip_size_kb": 64, 00:15:03.362 "state": "online", 00:15:03.362 "raid_level": "raid5f", 00:15:03.362 "superblock": true, 00:15:03.362 "num_base_bdevs": 4, 00:15:03.362 "num_base_bdevs_discovered": 4, 00:15:03.362 "num_base_bdevs_operational": 4, 00:15:03.362 "process": { 00:15:03.362 "type": "rebuild", 00:15:03.362 "target": "spare", 00:15:03.362 "progress": { 00:15:03.362 "blocks": 130560, 00:15:03.362 "percent": 68 00:15:03.362 } 00:15:03.362 }, 00:15:03.362 "base_bdevs_list": [ 00:15:03.362 { 00:15:03.362 "name": "spare", 00:15:03.362 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:15:03.362 "is_configured": true, 00:15:03.362 "data_offset": 2048, 00:15:03.362 "data_size": 63488 00:15:03.362 }, 00:15:03.362 { 00:15:03.362 "name": "BaseBdev2", 00:15:03.362 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:03.362 "is_configured": true, 00:15:03.362 "data_offset": 2048, 00:15:03.362 "data_size": 63488 00:15:03.362 }, 00:15:03.362 { 00:15:03.362 "name": "BaseBdev3", 00:15:03.362 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:03.362 "is_configured": true, 00:15:03.362 "data_offset": 2048, 00:15:03.362 "data_size": 63488 00:15:03.362 }, 00:15:03.362 { 00:15:03.362 "name": "BaseBdev4", 00:15:03.362 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:03.362 "is_configured": true, 00:15:03.362 "data_offset": 2048, 00:15:03.362 "data_size": 63488 00:15:03.362 } 00:15:03.362 ] 00:15:03.362 }' 00:15:03.362 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.362 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:15:03.362 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.362 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.362 01:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.303 01:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.303 01:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.303 01:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.303 01:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.303 01:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.303 01:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.303 01:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.303 01:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.303 01:16:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.303 01:16:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.303 01:16:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.303 01:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.303 "name": "raid_bdev1", 00:15:04.303 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:04.303 "strip_size_kb": 64, 00:15:04.303 "state": "online", 00:15:04.303 "raid_level": "raid5f", 00:15:04.303 "superblock": true, 00:15:04.303 "num_base_bdevs": 4, 00:15:04.303 "num_base_bdevs_discovered": 4, 
00:15:04.303 "num_base_bdevs_operational": 4, 00:15:04.303 "process": { 00:15:04.303 "type": "rebuild", 00:15:04.303 "target": "spare", 00:15:04.303 "progress": { 00:15:04.303 "blocks": 153600, 00:15:04.303 "percent": 80 00:15:04.303 } 00:15:04.303 }, 00:15:04.303 "base_bdevs_list": [ 00:15:04.303 { 00:15:04.303 "name": "spare", 00:15:04.303 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:15:04.303 "is_configured": true, 00:15:04.303 "data_offset": 2048, 00:15:04.303 "data_size": 63488 00:15:04.303 }, 00:15:04.303 { 00:15:04.303 "name": "BaseBdev2", 00:15:04.303 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:04.303 "is_configured": true, 00:15:04.303 "data_offset": 2048, 00:15:04.303 "data_size": 63488 00:15:04.303 }, 00:15:04.303 { 00:15:04.303 "name": "BaseBdev3", 00:15:04.303 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:04.303 "is_configured": true, 00:15:04.303 "data_offset": 2048, 00:15:04.303 "data_size": 63488 00:15:04.303 }, 00:15:04.303 { 00:15:04.303 "name": "BaseBdev4", 00:15:04.303 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:04.303 "is_configured": true, 00:15:04.303 "data_offset": 2048, 00:15:04.303 "data_size": 63488 00:15:04.303 } 00:15:04.303 ] 00:15:04.303 }' 00:15:04.303 01:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.563 01:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.563 01:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.563 01:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.563 01:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.503 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.503 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:15:05.503 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.503 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.503 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.503 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.503 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.503 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.503 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.503 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.503 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.503 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.503 "name": "raid_bdev1", 00:15:05.503 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:05.503 "strip_size_kb": 64, 00:15:05.503 "state": "online", 00:15:05.503 "raid_level": "raid5f", 00:15:05.503 "superblock": true, 00:15:05.503 "num_base_bdevs": 4, 00:15:05.503 "num_base_bdevs_discovered": 4, 00:15:05.503 "num_base_bdevs_operational": 4, 00:15:05.503 "process": { 00:15:05.503 "type": "rebuild", 00:15:05.503 "target": "spare", 00:15:05.503 "progress": { 00:15:05.503 "blocks": 174720, 00:15:05.503 "percent": 91 00:15:05.503 } 00:15:05.503 }, 00:15:05.503 "base_bdevs_list": [ 00:15:05.503 { 00:15:05.503 "name": "spare", 00:15:05.503 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:15:05.503 "is_configured": true, 00:15:05.503 "data_offset": 2048, 00:15:05.503 "data_size": 63488 00:15:05.503 }, 00:15:05.503 { 00:15:05.503 "name": "BaseBdev2", 
00:15:05.503 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:05.503 "is_configured": true, 00:15:05.503 "data_offset": 2048, 00:15:05.503 "data_size": 63488 00:15:05.503 }, 00:15:05.503 { 00:15:05.503 "name": "BaseBdev3", 00:15:05.503 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:05.503 "is_configured": true, 00:15:05.503 "data_offset": 2048, 00:15:05.503 "data_size": 63488 00:15:05.503 }, 00:15:05.503 { 00:15:05.503 "name": "BaseBdev4", 00:15:05.503 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:05.503 "is_configured": true, 00:15:05.503 "data_offset": 2048, 00:15:05.503 "data_size": 63488 00:15:05.503 } 00:15:05.503 ] 00:15:05.503 }' 00:15:05.503 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.503 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.503 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.762 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.762 01:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:06.332 [2024-10-15 01:16:18.941610] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:06.332 [2024-10-15 01:16:18.941784] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:06.332 [2024-10-15 01:16:18.941958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.592 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.592 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.592 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.592 01:16:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.592 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.593 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.593 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.593 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.593 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.593 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.593 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.593 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.593 "name": "raid_bdev1", 00:15:06.593 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:06.593 "strip_size_kb": 64, 00:15:06.593 "state": "online", 00:15:06.593 "raid_level": "raid5f", 00:15:06.593 "superblock": true, 00:15:06.593 "num_base_bdevs": 4, 00:15:06.593 "num_base_bdevs_discovered": 4, 00:15:06.593 "num_base_bdevs_operational": 4, 00:15:06.593 "base_bdevs_list": [ 00:15:06.593 { 00:15:06.593 "name": "spare", 00:15:06.593 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:15:06.593 "is_configured": true, 00:15:06.593 "data_offset": 2048, 00:15:06.593 "data_size": 63488 00:15:06.593 }, 00:15:06.593 { 00:15:06.593 "name": "BaseBdev2", 00:15:06.593 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:06.593 "is_configured": true, 00:15:06.593 "data_offset": 2048, 00:15:06.593 "data_size": 63488 00:15:06.593 }, 00:15:06.593 { 00:15:06.593 "name": "BaseBdev3", 00:15:06.593 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:06.593 "is_configured": true, 00:15:06.593 "data_offset": 2048, 00:15:06.593 
"data_size": 63488 00:15:06.593 }, 00:15:06.593 { 00:15:06.593 "name": "BaseBdev4", 00:15:06.593 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:06.593 "is_configured": true, 00:15:06.593 "data_offset": 2048, 00:15:06.593 "data_size": 63488 00:15:06.593 } 00:15:06.593 ] 00:15:06.593 }' 00:15:06.593 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.852 01:16:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.852 "name": "raid_bdev1", 00:15:06.852 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:06.852 "strip_size_kb": 64, 00:15:06.852 "state": "online", 00:15:06.852 "raid_level": "raid5f", 00:15:06.852 "superblock": true, 00:15:06.852 "num_base_bdevs": 4, 00:15:06.852 "num_base_bdevs_discovered": 4, 00:15:06.852 "num_base_bdevs_operational": 4, 00:15:06.852 "base_bdevs_list": [ 00:15:06.852 { 00:15:06.852 "name": "spare", 00:15:06.852 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:15:06.852 "is_configured": true, 00:15:06.852 "data_offset": 2048, 00:15:06.852 "data_size": 63488 00:15:06.852 }, 00:15:06.852 { 00:15:06.852 "name": "BaseBdev2", 00:15:06.852 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:06.852 "is_configured": true, 00:15:06.852 "data_offset": 2048, 00:15:06.852 "data_size": 63488 00:15:06.852 }, 00:15:06.852 { 00:15:06.852 "name": "BaseBdev3", 00:15:06.852 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:06.852 "is_configured": true, 00:15:06.852 "data_offset": 2048, 00:15:06.852 "data_size": 63488 00:15:06.852 }, 00:15:06.852 { 00:15:06.852 "name": "BaseBdev4", 00:15:06.852 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:06.852 "is_configured": true, 00:15:06.852 "data_offset": 2048, 00:15:06.852 "data_size": 63488 00:15:06.852 } 00:15:06.852 ] 00:15:06.852 }' 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.852 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.112 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.112 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.112 "name": "raid_bdev1", 00:15:07.112 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:07.112 "strip_size_kb": 64, 00:15:07.112 "state": "online", 00:15:07.112 "raid_level": "raid5f", 00:15:07.112 "superblock": true, 00:15:07.112 "num_base_bdevs": 4, 00:15:07.112 "num_base_bdevs_discovered": 4, 00:15:07.112 
"num_base_bdevs_operational": 4, 00:15:07.112 "base_bdevs_list": [ 00:15:07.112 { 00:15:07.112 "name": "spare", 00:15:07.112 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:15:07.112 "is_configured": true, 00:15:07.112 "data_offset": 2048, 00:15:07.112 "data_size": 63488 00:15:07.112 }, 00:15:07.112 { 00:15:07.112 "name": "BaseBdev2", 00:15:07.112 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:07.112 "is_configured": true, 00:15:07.112 "data_offset": 2048, 00:15:07.112 "data_size": 63488 00:15:07.112 }, 00:15:07.112 { 00:15:07.112 "name": "BaseBdev3", 00:15:07.112 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:07.112 "is_configured": true, 00:15:07.112 "data_offset": 2048, 00:15:07.112 "data_size": 63488 00:15:07.112 }, 00:15:07.112 { 00:15:07.112 "name": "BaseBdev4", 00:15:07.112 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:07.112 "is_configured": true, 00:15:07.112 "data_offset": 2048, 00:15:07.112 "data_size": 63488 00:15:07.112 } 00:15:07.112 ] 00:15:07.112 }' 00:15:07.112 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.112 01:16:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.372 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:07.372 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.372 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.633 [2024-10-15 01:16:20.101896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.633 [2024-10-15 01:16:20.101984] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.633 [2024-10-15 01:16:20.102077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.633 [2024-10-15 01:16:20.102166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:15:07.633 [2024-10-15 01:16:20.102194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:07.633 01:16:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:07.633 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:07.896 /dev/nbd0 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.896 1+0 records in 00:15:07.896 1+0 records out 00:15:07.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045456 s, 9.0 MB/s 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # size=4096 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:07.896 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:07.896 /dev/nbd1 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.156 1+0 records in 00:15:08.156 1+0 records out 00:15:08.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249033 s, 16.4 MB/s 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.156 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
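The `waitfornbd`/`waitfornbd_exit` helpers traced above both poll `/proc/partitions` up to 20 times, one waiting for the nbd device to appear, the other for it to disappear. A hedged sketch of that pattern; the partitions listing is passed in as a file argument here (an adaptation so the sketch runs without real NBD devices), with the retry count mirroring the log:

```shell
# Poll until $name is listed in the partitions file (device appeared).
wait_for_device() {
    local name=$1 partitions=$2 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" "$partitions" && return 0
        sleep 0.1
    done
    return 1
}

# Poll until $name is no longer listed (device detached).
wait_for_device_gone() {
    local name=$1 partitions=$2 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" "$partitions" || return 0
        sleep 0.1
    done
    return 1
}
```

The real helpers additionally confirm the device services I/O with a single `dd ... bs=4096 count=1 iflag=direct` read, which is what produces the "1+0 records in / 1+0 records out" lines in the trace.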
00:15:08.416 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:08.416 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:08.417 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:08.417 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.417 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.417 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:08.417 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:08.417 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.417 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.417 01:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.677 [2024-10-15 01:16:21.197617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:08.677 [2024-10-15 01:16:21.197734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.677 [2024-10-15 01:16:21.197776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:08.677 [2024-10-15 01:16:21.197810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.677 [2024-10-15 01:16:21.200021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.677 [2024-10-15 01:16:21.200143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:08.677 [2024-10-15 01:16:21.200282] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:08.677 [2024-10-15 01:16:21.200354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:08.677 [2024-10-15 01:16:21.200526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.677 [2024-10-15 01:16:21.200670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:15:08.677 [2024-10-15 01:16:21.200777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:08.677 spare 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.677 [2024-10-15 01:16:21.300733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:08.677 [2024-10-15 01:16:21.300861] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:08.677 [2024-10-15 01:16:21.301273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045820 00:15:08.677 [2024-10-15 01:16:21.301838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:08.677 [2024-10-15 01:16:21.301893] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:08.677 [2024-10-15 01:16:21.302118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.677 01:16:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.677 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.677 "name": "raid_bdev1", 00:15:08.677 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:08.677 "strip_size_kb": 64, 00:15:08.677 "state": "online", 00:15:08.677 "raid_level": "raid5f", 00:15:08.677 "superblock": true, 00:15:08.677 "num_base_bdevs": 4, 00:15:08.677 "num_base_bdevs_discovered": 4, 00:15:08.677 "num_base_bdevs_operational": 4, 00:15:08.677 "base_bdevs_list": [ 00:15:08.677 { 00:15:08.677 "name": "spare", 00:15:08.677 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:15:08.677 "is_configured": true, 00:15:08.677 "data_offset": 2048, 00:15:08.677 "data_size": 63488 00:15:08.677 }, 00:15:08.677 { 00:15:08.677 "name": "BaseBdev2", 00:15:08.677 "uuid": 
"9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:08.677 "is_configured": true, 00:15:08.677 "data_offset": 2048, 00:15:08.677 "data_size": 63488 00:15:08.677 }, 00:15:08.677 { 00:15:08.677 "name": "BaseBdev3", 00:15:08.677 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:08.677 "is_configured": true, 00:15:08.677 "data_offset": 2048, 00:15:08.677 "data_size": 63488 00:15:08.677 }, 00:15:08.678 { 00:15:08.678 "name": "BaseBdev4", 00:15:08.678 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:08.678 "is_configured": true, 00:15:08.678 "data_offset": 2048, 00:15:08.678 "data_size": 63488 00:15:08.678 } 00:15:08.678 ] 00:15:08.678 }' 00:15:08.678 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.678 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.247 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:09.247 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.247 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:09.247 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:09.247 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.247 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.247 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.247 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.247 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.247 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.247 01:16:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.247 "name": "raid_bdev1", 00:15:09.247 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:09.247 "strip_size_kb": 64, 00:15:09.247 "state": "online", 00:15:09.247 "raid_level": "raid5f", 00:15:09.247 "superblock": true, 00:15:09.247 "num_base_bdevs": 4, 00:15:09.247 "num_base_bdevs_discovered": 4, 00:15:09.247 "num_base_bdevs_operational": 4, 00:15:09.247 "base_bdevs_list": [ 00:15:09.247 { 00:15:09.247 "name": "spare", 00:15:09.247 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:15:09.247 "is_configured": true, 00:15:09.247 "data_offset": 2048, 00:15:09.247 "data_size": 63488 00:15:09.247 }, 00:15:09.247 { 00:15:09.247 "name": "BaseBdev2", 00:15:09.247 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:09.247 "is_configured": true, 00:15:09.247 "data_offset": 2048, 00:15:09.247 "data_size": 63488 00:15:09.247 }, 00:15:09.247 { 00:15:09.247 "name": "BaseBdev3", 00:15:09.247 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:09.247 "is_configured": true, 00:15:09.247 "data_offset": 2048, 00:15:09.247 "data_size": 63488 00:15:09.247 }, 00:15:09.247 { 00:15:09.247 "name": "BaseBdev4", 00:15:09.248 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:09.248 "is_configured": true, 00:15:09.248 "data_offset": 2048, 00:15:09.248 "data_size": 63488 00:15:09.248 } 00:15:09.248 ] 00:15:09.248 }' 00:15:09.248 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.248 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:09.248 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.248 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:09.248 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.248 
01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.248 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.248 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:09.248 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.507 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.507 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:09.507 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.507 01:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.507 [2024-10-15 01:16:21.996998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.507 "name": "raid_bdev1", 00:15:09.507 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:09.507 "strip_size_kb": 64, 00:15:09.507 "state": "online", 00:15:09.507 "raid_level": "raid5f", 00:15:09.507 "superblock": true, 00:15:09.507 "num_base_bdevs": 4, 00:15:09.507 "num_base_bdevs_discovered": 3, 00:15:09.507 "num_base_bdevs_operational": 3, 00:15:09.507 "base_bdevs_list": [ 00:15:09.507 { 00:15:09.507 "name": null, 00:15:09.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.507 "is_configured": false, 00:15:09.507 "data_offset": 0, 00:15:09.507 "data_size": 63488 00:15:09.507 }, 00:15:09.507 { 00:15:09.507 "name": "BaseBdev2", 00:15:09.507 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:09.507 "is_configured": true, 00:15:09.507 "data_offset": 2048, 00:15:09.507 "data_size": 63488 00:15:09.507 }, 00:15:09.507 { 00:15:09.507 "name": "BaseBdev3", 00:15:09.507 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:09.507 "is_configured": true, 00:15:09.507 "data_offset": 2048, 00:15:09.507 "data_size": 63488 00:15:09.507 }, 00:15:09.507 { 00:15:09.507 "name": "BaseBdev4", 
00:15:09.507 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:09.507 "is_configured": true, 00:15:09.507 "data_offset": 2048, 00:15:09.507 "data_size": 63488 00:15:09.507 } 00:15:09.507 ] 00:15:09.507 }' 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.507 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.767 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:09.767 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.767 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.767 [2024-10-15 01:16:22.484342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.767 [2024-10-15 01:16:22.484638] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:09.767 [2024-10-15 01:16:22.484658] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
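Each `verify_raid_bdev_state`/`verify_raid_bdev_process` call in the trace fetches all raid bdevs and narrows to the one under test with `jq -r '.[] | select(.name == "raid_bdev1")'`, then reads individual fields out of the result. A minimal sketch against a trimmed-down copy of the JSON shown above (most fields omitted for brevity):

```shell
# Trimmed stand-in for the bdev_raid_get_bdevs output in the log.
all_bdevs='[{"name": "raid_bdev1", "state": "online", "raid_level": "raid5f",
             "num_base_bdevs_discovered": 3, "num_base_bdevs_operational": 3}]'

# Narrow to the bdev under test, as bdev_raid.sh@113 does.
info=$(echo "$all_bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')

# Pull individual fields for comparison against the expected state.
state=$(echo "$info" | jq -r '.state')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')

# With no rebuild running there is no "process" object, so the
# // operator defaults to "none", as in bdev_raid.sh@176.
ptype=$(echo "$info" | jq -r '.process.type // "none"')
echo "$state $discovered $ptype"
```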
00:15:09.767 [2024-10-15 01:16:22.484709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:09.767 [2024-10-15 01:16:22.488939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000458f0 00:15:10.027 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.027 01:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:10.027 [2024-10-15 01:16:22.491269] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.967 "name": "raid_bdev1", 00:15:10.967 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:10.967 "strip_size_kb": 64, 00:15:10.967 "state": "online", 00:15:10.967 
"raid_level": "raid5f", 00:15:10.967 "superblock": true, 00:15:10.967 "num_base_bdevs": 4, 00:15:10.967 "num_base_bdevs_discovered": 4, 00:15:10.967 "num_base_bdevs_operational": 4, 00:15:10.967 "process": { 00:15:10.967 "type": "rebuild", 00:15:10.967 "target": "spare", 00:15:10.967 "progress": { 00:15:10.967 "blocks": 19200, 00:15:10.967 "percent": 10 00:15:10.967 } 00:15:10.967 }, 00:15:10.967 "base_bdevs_list": [ 00:15:10.967 { 00:15:10.967 "name": "spare", 00:15:10.967 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:15:10.967 "is_configured": true, 00:15:10.967 "data_offset": 2048, 00:15:10.967 "data_size": 63488 00:15:10.967 }, 00:15:10.967 { 00:15:10.967 "name": "BaseBdev2", 00:15:10.967 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:10.967 "is_configured": true, 00:15:10.967 "data_offset": 2048, 00:15:10.967 "data_size": 63488 00:15:10.967 }, 00:15:10.967 { 00:15:10.967 "name": "BaseBdev3", 00:15:10.967 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:10.967 "is_configured": true, 00:15:10.967 "data_offset": 2048, 00:15:10.967 "data_size": 63488 00:15:10.967 }, 00:15:10.967 { 00:15:10.967 "name": "BaseBdev4", 00:15:10.967 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:10.967 "is_configured": true, 00:15:10.967 "data_offset": 2048, 00:15:10.967 "data_size": 63488 00:15:10.967 } 00:15:10.967 ] 00:15:10.967 }' 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.967 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.967 [2024-10-15 01:16:23.656348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.228 [2024-10-15 01:16:23.699204] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:11.228 [2024-10-15 01:16:23.699286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.228 [2024-10-15 01:16:23.699308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.228 [2024-10-15 01:16:23.699315] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.228 "name": "raid_bdev1", 00:15:11.228 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:11.228 "strip_size_kb": 64, 00:15:11.228 "state": "online", 00:15:11.228 "raid_level": "raid5f", 00:15:11.228 "superblock": true, 00:15:11.228 "num_base_bdevs": 4, 00:15:11.228 "num_base_bdevs_discovered": 3, 00:15:11.228 "num_base_bdevs_operational": 3, 00:15:11.228 "base_bdevs_list": [ 00:15:11.228 { 00:15:11.228 "name": null, 00:15:11.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.228 "is_configured": false, 00:15:11.228 "data_offset": 0, 00:15:11.228 "data_size": 63488 00:15:11.228 }, 00:15:11.228 { 00:15:11.228 "name": "BaseBdev2", 00:15:11.228 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:11.228 "is_configured": true, 00:15:11.228 "data_offset": 2048, 00:15:11.228 "data_size": 63488 00:15:11.228 }, 00:15:11.228 { 00:15:11.228 "name": "BaseBdev3", 00:15:11.228 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:11.228 "is_configured": true, 00:15:11.228 "data_offset": 2048, 00:15:11.228 "data_size": 63488 00:15:11.228 }, 00:15:11.228 { 00:15:11.228 "name": "BaseBdev4", 00:15:11.228 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:11.228 "is_configured": true, 00:15:11.228 "data_offset": 2048, 00:15:11.228 "data_size": 63488 00:15:11.228 } 00:15:11.228 ] 00:15:11.228 
}' 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.228 01:16:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.487 01:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:11.487 01:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.487 01:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.487 [2024-10-15 01:16:24.208129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:11.487 [2024-10-15 01:16:24.208279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.487 [2024-10-15 01:16:24.208327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:11.487 [2024-10-15 01:16:24.208362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.487 [2024-10-15 01:16:24.208855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.487 [2024-10-15 01:16:24.208914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:11.487 [2024-10-15 01:16:24.209044] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:11.487 [2024-10-15 01:16:24.209085] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:11.487 [2024-10-15 01:16:24.209137] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
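The examine path logged above compares superblock sequence numbers: the spare's on-disk superblock carries seq_number 4 while the live raid bdev is at 5, so the spare is stale and gets re-added and rebuilt. The decision reduces to an integer comparison (values taken from the log; the variable names are illustrative):

```shell
bdev_seq=4   # seq_number in spare's on-disk superblock (from the log)
raid_seq=5   # current seq_number of raid_bdev1 (from the log)

if [ "$bdev_seq" -lt "$raid_seq" ]; then
    action="re-add and rebuild"   # stale member: data must be resynced
else
    action="configure in place"   # up to date: no rebuild needed
fi
echo "$action"
```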
00:15:11.487 [2024-10-15 01:16:24.209224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.747 [2024-10-15 01:16:24.213499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000459c0 00:15:11.747 spare 00:15:11.747 01:16:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.747 01:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:11.747 [2024-10-15 01:16:24.215798] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.687 "name": "raid_bdev1", 00:15:12.687 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:12.687 "strip_size_kb": 64, 00:15:12.687 "state": 
"online", 00:15:12.687 "raid_level": "raid5f", 00:15:12.687 "superblock": true, 00:15:12.687 "num_base_bdevs": 4, 00:15:12.687 "num_base_bdevs_discovered": 4, 00:15:12.687 "num_base_bdevs_operational": 4, 00:15:12.687 "process": { 00:15:12.687 "type": "rebuild", 00:15:12.687 "target": "spare", 00:15:12.687 "progress": { 00:15:12.687 "blocks": 19200, 00:15:12.687 "percent": 10 00:15:12.687 } 00:15:12.687 }, 00:15:12.687 "base_bdevs_list": [ 00:15:12.687 { 00:15:12.687 "name": "spare", 00:15:12.687 "uuid": "edf072ef-36a3-5947-bc11-9b521ceafbfe", 00:15:12.687 "is_configured": true, 00:15:12.687 "data_offset": 2048, 00:15:12.687 "data_size": 63488 00:15:12.687 }, 00:15:12.687 { 00:15:12.687 "name": "BaseBdev2", 00:15:12.687 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:12.687 "is_configured": true, 00:15:12.687 "data_offset": 2048, 00:15:12.687 "data_size": 63488 00:15:12.687 }, 00:15:12.687 { 00:15:12.687 "name": "BaseBdev3", 00:15:12.687 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:12.687 "is_configured": true, 00:15:12.687 "data_offset": 2048, 00:15:12.687 "data_size": 63488 00:15:12.687 }, 00:15:12.687 { 00:15:12.687 "name": "BaseBdev4", 00:15:12.687 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:12.687 "is_configured": true, 00:15:12.687 "data_offset": 2048, 00:15:12.687 "data_size": 63488 00:15:12.687 } 00:15:12.687 ] 00:15:12.687 }' 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:12.687 01:16:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.687 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.687 [2024-10-15 01:16:25.376301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.948 [2024-10-15 01:16:25.423672] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:12.948 [2024-10-15 01:16:25.423867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.948 [2024-10-15 01:16:25.423908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.948 [2024-10-15 01:16:25.423932] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.948 01:16:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.948 "name": "raid_bdev1", 00:15:12.948 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:12.948 "strip_size_kb": 64, 00:15:12.948 "state": "online", 00:15:12.948 "raid_level": "raid5f", 00:15:12.948 "superblock": true, 00:15:12.948 "num_base_bdevs": 4, 00:15:12.948 "num_base_bdevs_discovered": 3, 00:15:12.948 "num_base_bdevs_operational": 3, 00:15:12.948 "base_bdevs_list": [ 00:15:12.948 { 00:15:12.948 "name": null, 00:15:12.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.948 "is_configured": false, 00:15:12.948 "data_offset": 0, 00:15:12.948 "data_size": 63488 00:15:12.948 }, 00:15:12.948 { 00:15:12.948 "name": "BaseBdev2", 00:15:12.948 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:12.948 "is_configured": true, 00:15:12.948 "data_offset": 2048, 00:15:12.948 "data_size": 63488 00:15:12.948 }, 00:15:12.948 { 00:15:12.948 "name": "BaseBdev3", 00:15:12.948 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:12.948 "is_configured": true, 00:15:12.948 "data_offset": 2048, 00:15:12.948 "data_size": 63488 00:15:12.948 }, 00:15:12.948 { 00:15:12.948 "name": "BaseBdev4", 00:15:12.948 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:12.948 "is_configured": true, 00:15:12.948 "data_offset": 2048, 00:15:12.948 
"data_size": 63488 00:15:12.948 } 00:15:12.948 ] 00:15:12.948 }' 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.948 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.207 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.207 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.207 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.207 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.207 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.207 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.207 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.207 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.207 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.207 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.207 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.207 "name": "raid_bdev1", 00:15:13.207 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:13.207 "strip_size_kb": 64, 00:15:13.207 "state": "online", 00:15:13.207 "raid_level": "raid5f", 00:15:13.207 "superblock": true, 00:15:13.207 "num_base_bdevs": 4, 00:15:13.207 "num_base_bdevs_discovered": 3, 00:15:13.207 "num_base_bdevs_operational": 3, 00:15:13.207 "base_bdevs_list": [ 00:15:13.207 { 00:15:13.207 "name": null, 00:15:13.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.207 
"is_configured": false, 00:15:13.207 "data_offset": 0, 00:15:13.207 "data_size": 63488 00:15:13.207 }, 00:15:13.207 { 00:15:13.207 "name": "BaseBdev2", 00:15:13.207 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:13.207 "is_configured": true, 00:15:13.207 "data_offset": 2048, 00:15:13.207 "data_size": 63488 00:15:13.207 }, 00:15:13.207 { 00:15:13.207 "name": "BaseBdev3", 00:15:13.207 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:13.207 "is_configured": true, 00:15:13.207 "data_offset": 2048, 00:15:13.207 "data_size": 63488 00:15:13.207 }, 00:15:13.207 { 00:15:13.207 "name": "BaseBdev4", 00:15:13.207 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:13.207 "is_configured": true, 00:15:13.207 "data_offset": 2048, 00:15:13.207 "data_size": 63488 00:15:13.207 } 00:15:13.207 ] 00:15:13.207 }' 00:15:13.207 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.467 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.467 01:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.467 01:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.467 01:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:13.467 01:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.467 01:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.467 01:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.467 01:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:13.467 01:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.467 01:16:26 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.467 [2024-10-15 01:16:26.044523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:13.467 [2024-10-15 01:16:26.044589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.467 [2024-10-15 01:16:26.044611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:13.467 [2024-10-15 01:16:26.044623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.467 [2024-10-15 01:16:26.045034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.467 [2024-10-15 01:16:26.045053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:13.467 [2024-10-15 01:16:26.045126] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:13.467 [2024-10-15 01:16:26.045144] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:13.467 [2024-10-15 01:16:26.045152] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:13.467 [2024-10-15 01:16:26.045166] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:13.467 BaseBdev1 00:15:13.467 01:16:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.467 01:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:14.406 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:14.406 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.406 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:14.406 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.406 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.406 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.406 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.406 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.407 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.407 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.407 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.407 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.407 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.407 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.407 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.407 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.407 "name": "raid_bdev1", 00:15:14.407 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:14.407 "strip_size_kb": 64, 00:15:14.407 "state": "online", 00:15:14.407 "raid_level": "raid5f", 00:15:14.407 "superblock": true, 00:15:14.407 "num_base_bdevs": 4, 00:15:14.407 "num_base_bdevs_discovered": 3, 00:15:14.407 "num_base_bdevs_operational": 3, 00:15:14.407 "base_bdevs_list": [ 00:15:14.407 { 00:15:14.407 "name": null, 00:15:14.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.407 "is_configured": false, 00:15:14.407 
"data_offset": 0, 00:15:14.407 "data_size": 63488 00:15:14.407 }, 00:15:14.407 { 00:15:14.407 "name": "BaseBdev2", 00:15:14.407 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:14.407 "is_configured": true, 00:15:14.407 "data_offset": 2048, 00:15:14.407 "data_size": 63488 00:15:14.407 }, 00:15:14.407 { 00:15:14.407 "name": "BaseBdev3", 00:15:14.407 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:14.407 "is_configured": true, 00:15:14.407 "data_offset": 2048, 00:15:14.407 "data_size": 63488 00:15:14.407 }, 00:15:14.407 { 00:15:14.407 "name": "BaseBdev4", 00:15:14.407 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:14.407 "is_configured": true, 00:15:14.407 "data_offset": 2048, 00:15:14.407 "data_size": 63488 00:15:14.407 } 00:15:14.407 ] 00:15:14.407 }' 00:15:14.407 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.407 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.976 "name": "raid_bdev1", 00:15:14.976 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:14.976 "strip_size_kb": 64, 00:15:14.976 "state": "online", 00:15:14.976 "raid_level": "raid5f", 00:15:14.976 "superblock": true, 00:15:14.976 "num_base_bdevs": 4, 00:15:14.976 "num_base_bdevs_discovered": 3, 00:15:14.976 "num_base_bdevs_operational": 3, 00:15:14.976 "base_bdevs_list": [ 00:15:14.976 { 00:15:14.976 "name": null, 00:15:14.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.976 "is_configured": false, 00:15:14.976 "data_offset": 0, 00:15:14.976 "data_size": 63488 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "name": "BaseBdev2", 00:15:14.976 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:14.976 "is_configured": true, 00:15:14.976 "data_offset": 2048, 00:15:14.976 "data_size": 63488 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "name": "BaseBdev3", 00:15:14.976 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:14.976 "is_configured": true, 00:15:14.976 "data_offset": 2048, 00:15:14.976 "data_size": 63488 00:15:14.976 }, 00:15:14.976 { 00:15:14.976 "name": "BaseBdev4", 00:15:14.976 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:14.976 "is_configured": true, 00:15:14.976 "data_offset": 2048, 00:15:14.976 "data_size": 63488 00:15:14.976 } 00:15:14.976 ] 00:15:14.976 }' 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.976 
01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.976 [2024-10-15 01:16:27.677900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.976 [2024-10-15 01:16:27.678133] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:14.976 [2024-10-15 01:16:27.678201] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:14.976 request: 00:15:14.976 { 00:15:14.976 "base_bdev": "BaseBdev1", 00:15:14.976 "raid_bdev": "raid_bdev1", 00:15:14.976 "method": "bdev_raid_add_base_bdev", 00:15:14.976 "req_id": 1 00:15:14.976 } 00:15:14.976 Got JSON-RPC error response 00:15:14.976 response: 00:15:14.976 { 00:15:14.976 "code": -22, 00:15:14.976 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:15:14.976 } 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:14.976 01:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.355 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.355 "name": "raid_bdev1", 00:15:16.355 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:16.355 "strip_size_kb": 64, 00:15:16.355 "state": "online", 00:15:16.355 "raid_level": "raid5f", 00:15:16.355 "superblock": true, 00:15:16.355 "num_base_bdevs": 4, 00:15:16.355 "num_base_bdevs_discovered": 3, 00:15:16.355 "num_base_bdevs_operational": 3, 00:15:16.355 "base_bdevs_list": [ 00:15:16.355 { 00:15:16.355 "name": null, 00:15:16.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.356 "is_configured": false, 00:15:16.356 "data_offset": 0, 00:15:16.356 "data_size": 63488 00:15:16.356 }, 00:15:16.356 { 00:15:16.356 "name": "BaseBdev2", 00:15:16.356 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:16.356 "is_configured": true, 00:15:16.356 "data_offset": 2048, 00:15:16.356 "data_size": 63488 00:15:16.356 }, 00:15:16.356 { 00:15:16.356 "name": "BaseBdev3", 00:15:16.356 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:16.356 "is_configured": true, 00:15:16.356 "data_offset": 2048, 00:15:16.356 "data_size": 63488 00:15:16.356 }, 00:15:16.356 { 00:15:16.356 "name": "BaseBdev4", 00:15:16.356 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:16.356 "is_configured": true, 00:15:16.356 "data_offset": 2048, 00:15:16.356 "data_size": 63488 00:15:16.356 } 00:15:16.356 ] 00:15:16.356 }' 00:15:16.356 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.356 01:16:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.615 "name": "raid_bdev1", 00:15:16.615 "uuid": "428fb277-1d5f-45f6-a577-956030b5ade6", 00:15:16.615 "strip_size_kb": 64, 00:15:16.615 "state": "online", 00:15:16.615 "raid_level": "raid5f", 00:15:16.615 "superblock": true, 00:15:16.615 "num_base_bdevs": 4, 00:15:16.615 "num_base_bdevs_discovered": 3, 00:15:16.615 "num_base_bdevs_operational": 3, 00:15:16.615 "base_bdevs_list": [ 00:15:16.615 { 00:15:16.615 "name": null, 00:15:16.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.615 "is_configured": false, 00:15:16.615 "data_offset": 0, 00:15:16.615 "data_size": 63488 00:15:16.615 }, 00:15:16.615 { 00:15:16.615 "name": "BaseBdev2", 00:15:16.615 "uuid": "9323bbe0-a311-5dba-b7e5-152d60c3a3e3", 00:15:16.615 "is_configured": true, 
00:15:16.615 "data_offset": 2048, 00:15:16.615 "data_size": 63488 00:15:16.615 }, 00:15:16.615 { 00:15:16.615 "name": "BaseBdev3", 00:15:16.615 "uuid": "9738d003-d700-59ac-80be-d897fe5a081c", 00:15:16.615 "is_configured": true, 00:15:16.615 "data_offset": 2048, 00:15:16.615 "data_size": 63488 00:15:16.615 }, 00:15:16.615 { 00:15:16.615 "name": "BaseBdev4", 00:15:16.615 "uuid": "f374ae86-8e85-5467-bdb6-c26078aa5ea3", 00:15:16.615 "is_configured": true, 00:15:16.615 "data_offset": 2048, 00:15:16.615 "data_size": 63488 00:15:16.615 } 00:15:16.615 ] 00:15:16.615 }' 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95237 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95237 ']' 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95237 00:15:16.615 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:16.616 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:16.616 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95237 00:15:16.875 killing process with pid 95237 00:15:16.875 Received shutdown signal, test time was about 60.000000 seconds 00:15:16.875 00:15:16.875 Latency(us) 00:15:16.875 [2024-10-15T01:16:29.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.875 [2024-10-15T01:16:29.599Z] 
=================================================================================================================== 00:15:16.875 [2024-10-15T01:16:29.599Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:16.875 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:16.875 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:16.875 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95237' 00:15:16.875 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95237 00:15:16.875 [2024-10-15 01:16:29.340173] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.875 [2024-10-15 01:16:29.340317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.875 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95237 00:15:16.875 [2024-10-15 01:16:29.340399] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.875 [2024-10-15 01:16:29.340409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:15:16.875 [2024-10-15 01:16:29.392476] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:16.875 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:16.875 00:15:16.875 real 0m25.574s 00:15:16.875 user 0m32.804s 00:15:16.875 sys 0m3.082s 00:15:16.875 ************************************ 00:15:16.875 END TEST raid5f_rebuild_test_sb 00:15:16.875 ************************************ 00:15:16.875 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:16.875 01:16:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.135 01:16:29 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:15:17.135 01:16:29 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:15:17.135 01:16:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:17.135 01:16:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:17.135 01:16:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.135 ************************************ 00:15:17.135 START TEST raid_state_function_test_sb_4k 00:15:17.135 ************************************ 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:17.135 01:16:29 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96035 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96035' 00:15:17.135 Process raid pid: 96035 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96035 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96035 ']' 00:15:17.135 01:16:29 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:17.135 01:16:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.135 [2024-10-15 01:16:29.750690] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:15:17.135 [2024-10-15 01:16:29.750896] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.395 [2024-10-15 01:16:29.895074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.395 [2024-10-15 01:16:29.924761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.395 [2024-10-15 01:16:29.967283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.395 [2024-10-15 01:16:29.967423] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.964 [2024-10-15 01:16:30.597114] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:17.964 [2024-10-15 01:16:30.597232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:17.964 [2024-10-15 01:16:30.597267] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:17.964 [2024-10-15 01:16:30.597293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.964 
01:16:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.964 "name": "Existed_Raid", 00:15:17.964 "uuid": "c33b0f7c-9421-4d30-9d8e-c03851b43bc6", 00:15:17.964 "strip_size_kb": 0, 00:15:17.964 "state": "configuring", 00:15:17.964 "raid_level": "raid1", 00:15:17.964 "superblock": true, 00:15:17.964 "num_base_bdevs": 2, 00:15:17.964 "num_base_bdevs_discovered": 0, 00:15:17.964 "num_base_bdevs_operational": 2, 00:15:17.964 "base_bdevs_list": [ 00:15:17.964 { 00:15:17.964 "name": "BaseBdev1", 00:15:17.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.964 "is_configured": false, 00:15:17.964 "data_offset": 0, 00:15:17.964 "data_size": 0 00:15:17.964 }, 00:15:17.964 { 00:15:17.964 "name": "BaseBdev2", 00:15:17.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.964 "is_configured": false, 00:15:17.964 "data_offset": 0, 00:15:17.964 "data_size": 0 00:15:17.964 } 00:15:17.964 ] 00:15:17.964 }' 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.964 01:16:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.534 [2024-10-15 01:16:31.056236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:18.534 [2024-10-15 01:16:31.056335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.534 [2024-10-15 01:16:31.068219] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.534 [2024-10-15 01:16:31.068298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.534 [2024-10-15 01:16:31.068326] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:18.534 [2024-10-15 01:16:31.068360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.534 01:16:31 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.534 [2024-10-15 01:16:31.089134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.534 BaseBdev1 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.534 [ 00:15:18.534 { 00:15:18.534 "name": "BaseBdev1", 00:15:18.534 "aliases": [ 00:15:18.534 
"75eafc80-a3e0-4b95-a516-082480faa2c3" 00:15:18.534 ], 00:15:18.534 "product_name": "Malloc disk", 00:15:18.534 "block_size": 4096, 00:15:18.534 "num_blocks": 8192, 00:15:18.534 "uuid": "75eafc80-a3e0-4b95-a516-082480faa2c3", 00:15:18.534 "assigned_rate_limits": { 00:15:18.534 "rw_ios_per_sec": 0, 00:15:18.534 "rw_mbytes_per_sec": 0, 00:15:18.534 "r_mbytes_per_sec": 0, 00:15:18.534 "w_mbytes_per_sec": 0 00:15:18.534 }, 00:15:18.534 "claimed": true, 00:15:18.534 "claim_type": "exclusive_write", 00:15:18.534 "zoned": false, 00:15:18.534 "supported_io_types": { 00:15:18.534 "read": true, 00:15:18.534 "write": true, 00:15:18.534 "unmap": true, 00:15:18.534 "flush": true, 00:15:18.534 "reset": true, 00:15:18.534 "nvme_admin": false, 00:15:18.534 "nvme_io": false, 00:15:18.534 "nvme_io_md": false, 00:15:18.534 "write_zeroes": true, 00:15:18.534 "zcopy": true, 00:15:18.534 "get_zone_info": false, 00:15:18.534 "zone_management": false, 00:15:18.534 "zone_append": false, 00:15:18.534 "compare": false, 00:15:18.534 "compare_and_write": false, 00:15:18.534 "abort": true, 00:15:18.534 "seek_hole": false, 00:15:18.534 "seek_data": false, 00:15:18.534 "copy": true, 00:15:18.534 "nvme_iov_md": false 00:15:18.534 }, 00:15:18.534 "memory_domains": [ 00:15:18.534 { 00:15:18.534 "dma_device_id": "system", 00:15:18.534 "dma_device_type": 1 00:15:18.534 }, 00:15:18.534 { 00:15:18.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.534 "dma_device_type": 2 00:15:18.534 } 00:15:18.534 ], 00:15:18.534 "driver_specific": {} 00:15:18.534 } 00:15:18.534 ] 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.534 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.535 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.535 "name": "Existed_Raid", 00:15:18.535 "uuid": "97b0ff09-76c9-4e07-959e-77e7675f602a", 00:15:18.535 "strip_size_kb": 0, 00:15:18.535 "state": "configuring", 00:15:18.535 "raid_level": "raid1", 00:15:18.535 "superblock": true, 00:15:18.535 "num_base_bdevs": 2, 00:15:18.535 
"num_base_bdevs_discovered": 1, 00:15:18.535 "num_base_bdevs_operational": 2, 00:15:18.535 "base_bdevs_list": [ 00:15:18.535 { 00:15:18.535 "name": "BaseBdev1", 00:15:18.535 "uuid": "75eafc80-a3e0-4b95-a516-082480faa2c3", 00:15:18.535 "is_configured": true, 00:15:18.535 "data_offset": 256, 00:15:18.535 "data_size": 7936 00:15:18.535 }, 00:15:18.535 { 00:15:18.535 "name": "BaseBdev2", 00:15:18.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.535 "is_configured": false, 00:15:18.535 "data_offset": 0, 00:15:18.535 "data_size": 0 00:15:18.535 } 00:15:18.535 ] 00:15:18.535 }' 00:15:18.535 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.535 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.104 [2024-10-15 01:16:31.556374] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.104 [2024-10-15 01:16:31.556475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.104 [2024-10-15 01:16:31.568397] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.104 [2024-10-15 01:16:31.570224] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.104 [2024-10-15 01:16:31.570298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.104 "name": "Existed_Raid", 00:15:19.104 "uuid": "b7c4f622-d724-4e19-b8fd-2d3a76ca02dd", 00:15:19.104 "strip_size_kb": 0, 00:15:19.104 "state": "configuring", 00:15:19.104 "raid_level": "raid1", 00:15:19.104 "superblock": true, 00:15:19.104 "num_base_bdevs": 2, 00:15:19.104 "num_base_bdevs_discovered": 1, 00:15:19.104 "num_base_bdevs_operational": 2, 00:15:19.104 "base_bdevs_list": [ 00:15:19.104 { 00:15:19.104 "name": "BaseBdev1", 00:15:19.104 "uuid": "75eafc80-a3e0-4b95-a516-082480faa2c3", 00:15:19.104 "is_configured": true, 00:15:19.104 "data_offset": 256, 00:15:19.104 "data_size": 7936 00:15:19.104 }, 00:15:19.104 { 00:15:19.104 "name": "BaseBdev2", 00:15:19.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.104 "is_configured": false, 00:15:19.104 "data_offset": 0, 00:15:19.104 "data_size": 0 00:15:19.104 } 00:15:19.104 ] 00:15:19.104 }' 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.104 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.364 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:19.364 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.364 01:16:31 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.364 [2024-10-15 01:16:31.998680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.364 [2024-10-15 01:16:31.998977] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:19.364 [2024-10-15 01:16:31.999029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:19.364 [2024-10-15 01:16:31.999339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:19.364 BaseBdev2 00:15:19.364 [2024-10-15 01:16:31.999523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:19.364 [2024-10-15 01:16:31.999548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:19.364 [2024-10-15 01:16:31.999670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.364 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.364 01:16:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:19.364 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:19.364 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:19.364 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:19.364 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:19.364 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:19.364 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:19.364 01:16:32 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.364 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.364 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.364 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:19.364 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.364 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.364 [ 00:15:19.364 { 00:15:19.364 "name": "BaseBdev2", 00:15:19.364 "aliases": [ 00:15:19.364 "904f97d4-0194-42e8-9093-1643dd920a73" 00:15:19.364 ], 00:15:19.364 "product_name": "Malloc disk", 00:15:19.364 "block_size": 4096, 00:15:19.364 "num_blocks": 8192, 00:15:19.364 "uuid": "904f97d4-0194-42e8-9093-1643dd920a73", 00:15:19.364 "assigned_rate_limits": { 00:15:19.364 "rw_ios_per_sec": 0, 00:15:19.364 "rw_mbytes_per_sec": 0, 00:15:19.364 "r_mbytes_per_sec": 0, 00:15:19.364 "w_mbytes_per_sec": 0 00:15:19.364 }, 00:15:19.364 "claimed": true, 00:15:19.364 "claim_type": "exclusive_write", 00:15:19.364 "zoned": false, 00:15:19.364 "supported_io_types": { 00:15:19.364 "read": true, 00:15:19.364 "write": true, 00:15:19.364 "unmap": true, 00:15:19.365 "flush": true, 00:15:19.365 "reset": true, 00:15:19.365 "nvme_admin": false, 00:15:19.365 "nvme_io": false, 00:15:19.365 "nvme_io_md": false, 00:15:19.365 "write_zeroes": true, 00:15:19.365 "zcopy": true, 00:15:19.365 "get_zone_info": false, 00:15:19.365 "zone_management": false, 00:15:19.365 "zone_append": false, 00:15:19.365 "compare": false, 00:15:19.365 "compare_and_write": false, 00:15:19.365 "abort": true, 00:15:19.365 "seek_hole": false, 00:15:19.365 "seek_data": false, 00:15:19.365 "copy": true, 00:15:19.365 "nvme_iov_md": false 
00:15:19.365 }, 00:15:19.365 "memory_domains": [ 00:15:19.365 { 00:15:19.365 "dma_device_id": "system", 00:15:19.365 "dma_device_type": 1 00:15:19.365 }, 00:15:19.365 { 00:15:19.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.365 "dma_device_type": 2 00:15:19.365 } 00:15:19.365 ], 00:15:19.365 "driver_specific": {} 00:15:19.365 } 00:15:19.365 ] 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.365 "name": "Existed_Raid", 00:15:19.365 "uuid": "b7c4f622-d724-4e19-b8fd-2d3a76ca02dd", 00:15:19.365 "strip_size_kb": 0, 00:15:19.365 "state": "online", 00:15:19.365 "raid_level": "raid1", 00:15:19.365 "superblock": true, 00:15:19.365 "num_base_bdevs": 2, 00:15:19.365 "num_base_bdevs_discovered": 2, 00:15:19.365 "num_base_bdevs_operational": 2, 00:15:19.365 "base_bdevs_list": [ 00:15:19.365 { 00:15:19.365 "name": "BaseBdev1", 00:15:19.365 "uuid": "75eafc80-a3e0-4b95-a516-082480faa2c3", 00:15:19.365 "is_configured": true, 00:15:19.365 "data_offset": 256, 00:15:19.365 "data_size": 7936 00:15:19.365 }, 00:15:19.365 { 00:15:19.365 "name": "BaseBdev2", 00:15:19.365 "uuid": "904f97d4-0194-42e8-9093-1643dd920a73", 00:15:19.365 "is_configured": true, 00:15:19.365 "data_offset": 256, 00:15:19.365 "data_size": 7936 00:15:19.365 } 00:15:19.365 ] 00:15:19.365 }' 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.365 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:19.949 01:16:32 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.949 [2024-10-15 01:16:32.474192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:19.949 "name": "Existed_Raid", 00:15:19.949 "aliases": [ 00:15:19.949 "b7c4f622-d724-4e19-b8fd-2d3a76ca02dd" 00:15:19.949 ], 00:15:19.949 "product_name": "Raid Volume", 00:15:19.949 "block_size": 4096, 00:15:19.949 "num_blocks": 7936, 00:15:19.949 "uuid": "b7c4f622-d724-4e19-b8fd-2d3a76ca02dd", 00:15:19.949 "assigned_rate_limits": { 00:15:19.949 "rw_ios_per_sec": 0, 00:15:19.949 "rw_mbytes_per_sec": 0, 00:15:19.949 "r_mbytes_per_sec": 0, 00:15:19.949 "w_mbytes_per_sec": 0 00:15:19.949 }, 00:15:19.949 "claimed": false, 00:15:19.949 "zoned": false, 00:15:19.949 "supported_io_types": { 00:15:19.949 "read": true, 
00:15:19.949 "write": true, 00:15:19.949 "unmap": false, 00:15:19.949 "flush": false, 00:15:19.949 "reset": true, 00:15:19.949 "nvme_admin": false, 00:15:19.949 "nvme_io": false, 00:15:19.949 "nvme_io_md": false, 00:15:19.949 "write_zeroes": true, 00:15:19.949 "zcopy": false, 00:15:19.949 "get_zone_info": false, 00:15:19.949 "zone_management": false, 00:15:19.949 "zone_append": false, 00:15:19.949 "compare": false, 00:15:19.949 "compare_and_write": false, 00:15:19.949 "abort": false, 00:15:19.949 "seek_hole": false, 00:15:19.949 "seek_data": false, 00:15:19.949 "copy": false, 00:15:19.949 "nvme_iov_md": false 00:15:19.949 }, 00:15:19.949 "memory_domains": [ 00:15:19.949 { 00:15:19.949 "dma_device_id": "system", 00:15:19.949 "dma_device_type": 1 00:15:19.949 }, 00:15:19.949 { 00:15:19.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.949 "dma_device_type": 2 00:15:19.949 }, 00:15:19.949 { 00:15:19.949 "dma_device_id": "system", 00:15:19.949 "dma_device_type": 1 00:15:19.949 }, 00:15:19.949 { 00:15:19.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.949 "dma_device_type": 2 00:15:19.949 } 00:15:19.949 ], 00:15:19.949 "driver_specific": { 00:15:19.949 "raid": { 00:15:19.949 "uuid": "b7c4f622-d724-4e19-b8fd-2d3a76ca02dd", 00:15:19.949 "strip_size_kb": 0, 00:15:19.949 "state": "online", 00:15:19.949 "raid_level": "raid1", 00:15:19.949 "superblock": true, 00:15:19.949 "num_base_bdevs": 2, 00:15:19.949 "num_base_bdevs_discovered": 2, 00:15:19.949 "num_base_bdevs_operational": 2, 00:15:19.949 "base_bdevs_list": [ 00:15:19.949 { 00:15:19.949 "name": "BaseBdev1", 00:15:19.949 "uuid": "75eafc80-a3e0-4b95-a516-082480faa2c3", 00:15:19.949 "is_configured": true, 00:15:19.949 "data_offset": 256, 00:15:19.949 "data_size": 7936 00:15:19.949 }, 00:15:19.949 { 00:15:19.949 "name": "BaseBdev2", 00:15:19.949 "uuid": "904f97d4-0194-42e8-9093-1643dd920a73", 00:15:19.949 "is_configured": true, 00:15:19.949 "data_offset": 256, 00:15:19.949 "data_size": 7936 00:15:19.949 } 
00:15:19.949 ] 00:15:19.949 } 00:15:19.949 } 00:15:19.949 }' 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:19.949 BaseBdev2' 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.949 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.209 [2024-10-15 01:16:32.697595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:20.209 01:16:32 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.209 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.210 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.210 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.210 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.210 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.210 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.210 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.210 "name": "Existed_Raid", 00:15:20.210 "uuid": "b7c4f622-d724-4e19-b8fd-2d3a76ca02dd", 00:15:20.210 "strip_size_kb": 0, 00:15:20.210 "state": "online", 00:15:20.210 "raid_level": "raid1", 00:15:20.210 "superblock": true, 00:15:20.210 
"num_base_bdevs": 2, 00:15:20.210 "num_base_bdevs_discovered": 1, 00:15:20.210 "num_base_bdevs_operational": 1, 00:15:20.210 "base_bdevs_list": [ 00:15:20.210 { 00:15:20.210 "name": null, 00:15:20.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.210 "is_configured": false, 00:15:20.210 "data_offset": 0, 00:15:20.210 "data_size": 7936 00:15:20.210 }, 00:15:20.210 { 00:15:20.210 "name": "BaseBdev2", 00:15:20.210 "uuid": "904f97d4-0194-42e8-9093-1643dd920a73", 00:15:20.210 "is_configured": true, 00:15:20.210 "data_offset": 256, 00:15:20.210 "data_size": 7936 00:15:20.210 } 00:15:20.210 ] 00:15:20.210 }' 00:15:20.210 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.210 01:16:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.469 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:20.469 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:20.469 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:20.469 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.469 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.469 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.729 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.729 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:20.729 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:20.729 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:15:20.729 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.729 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.729 [2024-10-15 01:16:33.235972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:20.729 [2024-10-15 01:16:33.236153] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.729 [2024-10-15 01:16:33.247978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.729 [2024-10-15 01:16:33.248093] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.729 [2024-10-15 01:16:33.248158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:20.729 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.729 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:20.730 01:16:33 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96035 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96035 ']' 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96035 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96035 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:20.730 killing process with pid 96035 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96035' 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96035 00:15:20.730 [2024-10-15 01:16:33.332923] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:20.730 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96035 00:15:20.730 [2024-10-15 01:16:33.333929] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.990 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:15:20.990 00:15:20.990 real 0m3.879s 00:15:20.990 user 0m6.110s 00:15:20.990 sys 0m0.831s 00:15:20.990 01:16:33 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.990 01:16:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.990 ************************************ 00:15:20.990 END TEST raid_state_function_test_sb_4k 00:15:20.990 ************************************ 00:15:20.990 01:16:33 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:20.990 01:16:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:20.990 01:16:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.990 01:16:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:20.990 ************************************ 00:15:20.990 START TEST raid_superblock_test_4k 00:15:20.990 ************************************ 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:20.990 
01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96276 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 96276 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96276 ']' 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.990 01:16:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.990 [2024-10-15 01:16:33.683785] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:15:20.990 [2024-10-15 01:16:33.683925] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96276 ] 00:15:21.251 [2024-10-15 01:16:33.825249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.251 [2024-10-15 01:16:33.855110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.251 [2024-10-15 01:16:33.897682] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.251 [2024-10-15 01:16:33.897725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.821 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.821 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:15:21.821 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:21.821 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:21.821 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:21.821 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:21.821 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:21.821 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:21.821 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:21.821 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:21.821 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:15:21.821 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.821 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.080 malloc1 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.080 [2024-10-15 01:16:34.556738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:22.080 [2024-10-15 01:16:34.556848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.080 [2024-10-15 01:16:34.556886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:22.080 [2024-10-15 01:16:34.556919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.080 [2024-10-15 01:16:34.559151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.080 [2024-10-15 01:16:34.559236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:22.080 pt1 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.080 malloc2 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.080 [2024-10-15 01:16:34.589652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:22.080 [2024-10-15 01:16:34.589756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.080 [2024-10-15 01:16:34.589790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:22.080 [2024-10-15 01:16:34.589819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.080 [2024-10-15 01:16:34.591943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.080 [2024-10-15 
01:16:34.592018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:22.080 pt2 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.080 [2024-10-15 01:16:34.601680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:22.080 [2024-10-15 01:16:34.603524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:22.080 [2024-10-15 01:16:34.603714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:22.080 [2024-10-15 01:16:34.603760] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:22.080 [2024-10-15 01:16:34.604043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:22.080 [2024-10-15 01:16:34.604256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:22.080 [2024-10-15 01:16:34.604299] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:22.080 [2024-10-15 01:16:34.604481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.080 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.081 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.081 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.081 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.081 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.081 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.081 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.081 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.081 "name": "raid_bdev1", 00:15:22.081 "uuid": "d431fe41-77f6-4b4c-9df4-593ab96b5e93", 00:15:22.081 "strip_size_kb": 0, 00:15:22.081 "state": "online", 00:15:22.081 "raid_level": "raid1", 00:15:22.081 "superblock": true, 00:15:22.081 "num_base_bdevs": 2, 00:15:22.081 
"num_base_bdevs_discovered": 2, 00:15:22.081 "num_base_bdevs_operational": 2, 00:15:22.081 "base_bdevs_list": [ 00:15:22.081 { 00:15:22.081 "name": "pt1", 00:15:22.081 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:22.081 "is_configured": true, 00:15:22.081 "data_offset": 256, 00:15:22.081 "data_size": 7936 00:15:22.081 }, 00:15:22.081 { 00:15:22.081 "name": "pt2", 00:15:22.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:22.081 "is_configured": true, 00:15:22.081 "data_offset": 256, 00:15:22.081 "data_size": 7936 00:15:22.081 } 00:15:22.081 ] 00:15:22.081 }' 00:15:22.081 01:16:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.081 01:16:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.649 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:22.649 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:22.649 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:22.649 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:22.649 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:22.649 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:22.649 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:22.649 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:22.649 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.649 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.649 [2024-10-15 01:16:35.089146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:22.649 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.649 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:22.649 "name": "raid_bdev1", 00:15:22.649 "aliases": [ 00:15:22.649 "d431fe41-77f6-4b4c-9df4-593ab96b5e93" 00:15:22.649 ], 00:15:22.649 "product_name": "Raid Volume", 00:15:22.649 "block_size": 4096, 00:15:22.649 "num_blocks": 7936, 00:15:22.649 "uuid": "d431fe41-77f6-4b4c-9df4-593ab96b5e93", 00:15:22.649 "assigned_rate_limits": { 00:15:22.649 "rw_ios_per_sec": 0, 00:15:22.650 "rw_mbytes_per_sec": 0, 00:15:22.650 "r_mbytes_per_sec": 0, 00:15:22.650 "w_mbytes_per_sec": 0 00:15:22.650 }, 00:15:22.650 "claimed": false, 00:15:22.650 "zoned": false, 00:15:22.650 "supported_io_types": { 00:15:22.650 "read": true, 00:15:22.650 "write": true, 00:15:22.650 "unmap": false, 00:15:22.650 "flush": false, 00:15:22.650 "reset": true, 00:15:22.650 "nvme_admin": false, 00:15:22.650 "nvme_io": false, 00:15:22.650 "nvme_io_md": false, 00:15:22.650 "write_zeroes": true, 00:15:22.650 "zcopy": false, 00:15:22.650 "get_zone_info": false, 00:15:22.650 "zone_management": false, 00:15:22.650 "zone_append": false, 00:15:22.650 "compare": false, 00:15:22.650 "compare_and_write": false, 00:15:22.650 "abort": false, 00:15:22.650 "seek_hole": false, 00:15:22.650 "seek_data": false, 00:15:22.650 "copy": false, 00:15:22.650 "nvme_iov_md": false 00:15:22.650 }, 00:15:22.650 "memory_domains": [ 00:15:22.650 { 00:15:22.650 "dma_device_id": "system", 00:15:22.650 "dma_device_type": 1 00:15:22.650 }, 00:15:22.650 { 00:15:22.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.650 "dma_device_type": 2 00:15:22.650 }, 00:15:22.650 { 00:15:22.650 "dma_device_id": "system", 00:15:22.650 "dma_device_type": 1 00:15:22.650 }, 00:15:22.650 { 00:15:22.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.650 "dma_device_type": 2 00:15:22.650 } 00:15:22.650 ], 
00:15:22.650 "driver_specific": { 00:15:22.650 "raid": { 00:15:22.650 "uuid": "d431fe41-77f6-4b4c-9df4-593ab96b5e93", 00:15:22.650 "strip_size_kb": 0, 00:15:22.650 "state": "online", 00:15:22.650 "raid_level": "raid1", 00:15:22.650 "superblock": true, 00:15:22.650 "num_base_bdevs": 2, 00:15:22.650 "num_base_bdevs_discovered": 2, 00:15:22.650 "num_base_bdevs_operational": 2, 00:15:22.650 "base_bdevs_list": [ 00:15:22.650 { 00:15:22.650 "name": "pt1", 00:15:22.650 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:22.650 "is_configured": true, 00:15:22.650 "data_offset": 256, 00:15:22.650 "data_size": 7936 00:15:22.650 }, 00:15:22.650 { 00:15:22.650 "name": "pt2", 00:15:22.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:22.650 "is_configured": true, 00:15:22.650 "data_offset": 256, 00:15:22.650 "data_size": 7936 00:15:22.650 } 00:15:22.650 ] 00:15:22.650 } 00:15:22.650 } 00:15:22.650 }' 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:22.650 pt2' 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.650 01:16:35 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.650 [2024-10-15 01:16:35.300710] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d431fe41-77f6-4b4c-9df4-593ab96b5e93 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z d431fe41-77f6-4b4c-9df4-593ab96b5e93 ']' 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.650 [2024-10-15 01:16:35.344379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.650 [2024-10-15 01:16:35.344460] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.650 [2024-10-15 01:16:35.344566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.650 [2024-10-15 01:16:35.344658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.650 [2024-10-15 01:16:35.344706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.650 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.909 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.909 [2024-10-15 01:16:35.472223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:22.909 [2024-10-15 01:16:35.474158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:22.909 [2024-10-15 01:16:35.474284] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:22.909 [2024-10-15 01:16:35.474399] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:22.909 [2024-10-15 01:16:35.474453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.909 [2024-10-15 01:16:35.474483] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:22.909 request: 00:15:22.909 { 00:15:22.909 "name": "raid_bdev1", 00:15:22.909 "raid_level": "raid1", 00:15:22.909 "base_bdevs": [ 00:15:22.909 "malloc1", 00:15:22.910 "malloc2" 00:15:22.910 ], 00:15:22.910 "superblock": false, 00:15:22.910 "method": "bdev_raid_create", 00:15:22.910 "req_id": 1 00:15:22.910 } 00:15:22.910 Got JSON-RPC error response 00:15:22.910 response: 00:15:22.910 { 00:15:22.910 "code": -17, 00:15:22.910 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:22.910 } 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.910 [2024-10-15 01:16:35.540021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:22.910 [2024-10-15 01:16:35.540137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.910 [2024-10-15 01:16:35.540173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:22.910 [2024-10-15 01:16:35.540210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.910 [2024-10-15 01:16:35.542422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.910 [2024-10-15 01:16:35.542489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:22.910 [2024-10-15 01:16:35.542591] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:22.910 [2024-10-15 01:16:35.542650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:22.910 pt1 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.910 "name": "raid_bdev1", 00:15:22.910 "uuid": "d431fe41-77f6-4b4c-9df4-593ab96b5e93", 00:15:22.910 "strip_size_kb": 0, 00:15:22.910 "state": "configuring", 00:15:22.910 "raid_level": "raid1", 00:15:22.910 "superblock": true, 00:15:22.910 "num_base_bdevs": 2, 00:15:22.910 "num_base_bdevs_discovered": 1, 00:15:22.910 "num_base_bdevs_operational": 2, 00:15:22.910 "base_bdevs_list": [ 00:15:22.910 { 00:15:22.910 "name": "pt1", 00:15:22.910 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:22.910 "is_configured": true, 00:15:22.910 "data_offset": 256, 00:15:22.910 "data_size": 7936 00:15:22.910 }, 00:15:22.910 { 00:15:22.910 "name": null, 00:15:22.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:22.910 "is_configured": false, 00:15:22.910 "data_offset": 256, 00:15:22.910 "data_size": 7936 00:15:22.910 } 
00:15:22.910 ] 00:15:22.910 }' 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.910 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.479 [2024-10-15 01:16:35.967317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:23.479 [2024-10-15 01:16:35.967444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.479 [2024-10-15 01:16:35.967481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:23.479 [2024-10-15 01:16:35.967508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.479 [2024-10-15 01:16:35.967941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.479 [2024-10-15 01:16:35.967995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:23.479 [2024-10-15 01:16:35.968096] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:23.479 [2024-10-15 01:16:35.968170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:23.479 [2024-10-15 01:16:35.968307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000001900 00:15:23.479 [2024-10-15 01:16:35.968344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:23.479 [2024-10-15 01:16:35.968606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:23.479 [2024-10-15 01:16:35.968767] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:23.479 [2024-10-15 01:16:35.968813] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:23.479 [2024-10-15 01:16:35.968962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.479 pt2 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.479 01:16:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.479 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.479 "name": "raid_bdev1", 00:15:23.479 "uuid": "d431fe41-77f6-4b4c-9df4-593ab96b5e93", 00:15:23.479 "strip_size_kb": 0, 00:15:23.479 "state": "online", 00:15:23.479 "raid_level": "raid1", 00:15:23.479 "superblock": true, 00:15:23.479 "num_base_bdevs": 2, 00:15:23.479 "num_base_bdevs_discovered": 2, 00:15:23.479 "num_base_bdevs_operational": 2, 00:15:23.479 "base_bdevs_list": [ 00:15:23.479 { 00:15:23.479 "name": "pt1", 00:15:23.479 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:23.479 "is_configured": true, 00:15:23.479 "data_offset": 256, 00:15:23.479 "data_size": 7936 00:15:23.479 }, 00:15:23.479 { 00:15:23.479 "name": "pt2", 00:15:23.479 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:23.479 "is_configured": true, 00:15:23.479 "data_offset": 256, 00:15:23.479 "data_size": 7936 00:15:23.479 } 00:15:23.479 ] 00:15:23.479 }' 00:15:23.479 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.479 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.739 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:15:23.739 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:23.739 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:23.739 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:23.739 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:23.739 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:23.739 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:23.739 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:23.739 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.739 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.739 [2024-10-15 01:16:36.426769] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.739 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.739 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:23.739 "name": "raid_bdev1", 00:15:23.739 "aliases": [ 00:15:23.739 "d431fe41-77f6-4b4c-9df4-593ab96b5e93" 00:15:23.739 ], 00:15:23.739 "product_name": "Raid Volume", 00:15:23.739 "block_size": 4096, 00:15:23.739 "num_blocks": 7936, 00:15:23.739 "uuid": "d431fe41-77f6-4b4c-9df4-593ab96b5e93", 00:15:23.739 "assigned_rate_limits": { 00:15:23.739 "rw_ios_per_sec": 0, 00:15:23.739 "rw_mbytes_per_sec": 0, 00:15:23.739 "r_mbytes_per_sec": 0, 00:15:23.739 "w_mbytes_per_sec": 0 00:15:23.739 }, 00:15:23.739 "claimed": false, 00:15:23.739 "zoned": false, 00:15:23.739 "supported_io_types": { 00:15:23.739 "read": true, 00:15:23.739 "write": true, 00:15:23.739 "unmap": false, 
00:15:23.739 "flush": false, 00:15:23.739 "reset": true, 00:15:23.739 "nvme_admin": false, 00:15:23.739 "nvme_io": false, 00:15:23.739 "nvme_io_md": false, 00:15:23.739 "write_zeroes": true, 00:15:23.739 "zcopy": false, 00:15:23.739 "get_zone_info": false, 00:15:23.739 "zone_management": false, 00:15:23.739 "zone_append": false, 00:15:23.739 "compare": false, 00:15:23.739 "compare_and_write": false, 00:15:23.739 "abort": false, 00:15:23.739 "seek_hole": false, 00:15:23.739 "seek_data": false, 00:15:23.739 "copy": false, 00:15:23.739 "nvme_iov_md": false 00:15:23.739 }, 00:15:23.739 "memory_domains": [ 00:15:23.739 { 00:15:23.739 "dma_device_id": "system", 00:15:23.739 "dma_device_type": 1 00:15:23.739 }, 00:15:23.739 { 00:15:23.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.739 "dma_device_type": 2 00:15:23.739 }, 00:15:23.739 { 00:15:23.739 "dma_device_id": "system", 00:15:23.739 "dma_device_type": 1 00:15:23.739 }, 00:15:23.739 { 00:15:23.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.739 "dma_device_type": 2 00:15:23.739 } 00:15:23.739 ], 00:15:23.739 "driver_specific": { 00:15:23.739 "raid": { 00:15:23.739 "uuid": "d431fe41-77f6-4b4c-9df4-593ab96b5e93", 00:15:23.739 "strip_size_kb": 0, 00:15:23.739 "state": "online", 00:15:23.739 "raid_level": "raid1", 00:15:23.739 "superblock": true, 00:15:23.739 "num_base_bdevs": 2, 00:15:23.739 "num_base_bdevs_discovered": 2, 00:15:23.739 "num_base_bdevs_operational": 2, 00:15:23.739 "base_bdevs_list": [ 00:15:23.739 { 00:15:23.739 "name": "pt1", 00:15:23.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:23.739 "is_configured": true, 00:15:23.739 "data_offset": 256, 00:15:23.739 "data_size": 7936 00:15:23.739 }, 00:15:23.739 { 00:15:23.739 "name": "pt2", 00:15:23.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:23.739 "is_configured": true, 00:15:23.739 "data_offset": 256, 00:15:23.739 "data_size": 7936 00:15:23.739 } 00:15:23.739 ] 00:15:23.739 } 00:15:23.739 } 00:15:23.739 }' 00:15:23.999 
01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:23.999 pt2' 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.999 [2024-10-15 01:16:36.666379] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' d431fe41-77f6-4b4c-9df4-593ab96b5e93 '!=' d431fe41-77f6-4b4c-9df4-593ab96b5e93 ']' 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.999 [2024-10-15 01:16:36.714100] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.999 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:24.000 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.000 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.000 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.000 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.259 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.259 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.259 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.259 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.259 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.259 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.259 "name": "raid_bdev1", 00:15:24.259 "uuid": 
"d431fe41-77f6-4b4c-9df4-593ab96b5e93", 00:15:24.259 "strip_size_kb": 0, 00:15:24.259 "state": "online", 00:15:24.259 "raid_level": "raid1", 00:15:24.259 "superblock": true, 00:15:24.259 "num_base_bdevs": 2, 00:15:24.259 "num_base_bdevs_discovered": 1, 00:15:24.259 "num_base_bdevs_operational": 1, 00:15:24.259 "base_bdevs_list": [ 00:15:24.259 { 00:15:24.259 "name": null, 00:15:24.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.259 "is_configured": false, 00:15:24.259 "data_offset": 0, 00:15:24.259 "data_size": 7936 00:15:24.259 }, 00:15:24.259 { 00:15:24.259 "name": "pt2", 00:15:24.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:24.259 "is_configured": true, 00:15:24.259 "data_offset": 256, 00:15:24.259 "data_size": 7936 00:15:24.259 } 00:15:24.259 ] 00:15:24.259 }' 00:15:24.259 01:16:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.259 01:16:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.520 [2024-10-15 01:16:37.129321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:24.520 [2024-10-15 01:16:37.129422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.520 [2024-10-15 01:16:37.129537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.520 [2024-10-15 01:16:37.129605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.520 [2024-10-15 01:16:37.129667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state 
offline 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.520 [2024-10-15 01:16:37.205150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:24.520 [2024-10-15 01:16:37.205277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.520 [2024-10-15 01:16:37.205335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:24.520 [2024-10-15 01:16:37.205365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.520 [2024-10-15 01:16:37.207534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.520 [2024-10-15 01:16:37.207603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:24.520 [2024-10-15 01:16:37.207706] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:24.520 [2024-10-15 01:16:37.207762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:24.520 [2024-10-15 01:16:37.207870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:24.520 [2024-10-15 01:16:37.207905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:24.520 [2024-10-15 01:16:37.208168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:24.520 [2024-10-15 01:16:37.208330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:24.520 pt2 00:15:24.520 [2024-10-15 01:16:37.208378] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x617000001c80 00:15:24.520 [2024-10-15 01:16:37.208493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.520 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.780 01:16:37 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.780 "name": "raid_bdev1", 00:15:24.780 "uuid": "d431fe41-77f6-4b4c-9df4-593ab96b5e93", 00:15:24.780 "strip_size_kb": 0, 00:15:24.780 "state": "online", 00:15:24.780 "raid_level": "raid1", 00:15:24.780 "superblock": true, 00:15:24.780 "num_base_bdevs": 2, 00:15:24.780 "num_base_bdevs_discovered": 1, 00:15:24.780 "num_base_bdevs_operational": 1, 00:15:24.780 "base_bdevs_list": [ 00:15:24.780 { 00:15:24.780 "name": null, 00:15:24.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.780 "is_configured": false, 00:15:24.780 "data_offset": 256, 00:15:24.780 "data_size": 7936 00:15:24.780 }, 00:15:24.780 { 00:15:24.780 "name": "pt2", 00:15:24.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:24.780 "is_configured": true, 00:15:24.780 "data_offset": 256, 00:15:24.780 "data_size": 7936 00:15:24.780 } 00:15:24.780 ] 00:15:24.780 }' 00:15:24.780 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.780 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.040 [2024-10-15 01:16:37.604457] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:25.040 [2024-10-15 01:16:37.604538] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.040 [2024-10-15 01:16:37.604638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.040 [2024-10-15 01:16:37.604702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:15:25.040 [2024-10-15 01:16:37.604752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.040 [2024-10-15 01:16:37.656382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:25.040 [2024-10-15 01:16:37.656511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.040 [2024-10-15 01:16:37.656547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:25.040 [2024-10-15 01:16:37.656580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.040 [2024-10-15 01:16:37.658811] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.040 [2024-10-15 01:16:37.658881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:25.040 [2024-10-15 01:16:37.659006] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:25.040 [2024-10-15 01:16:37.659075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:25.040 [2024-10-15 01:16:37.659227] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:25.040 [2024-10-15 01:16:37.659282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:25.040 [2024-10-15 01:16:37.659327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:25.040 [2024-10-15 01:16:37.659408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:25.040 [2024-10-15 01:16:37.659517] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:25.040 [2024-10-15 01:16:37.659557] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:25.040 [2024-10-15 01:16:37.659792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:25.040 [2024-10-15 01:16:37.659913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:25.040 [2024-10-15 01:16:37.659923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:25.040 [2024-10-15 01:16:37.660033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.040 pt1 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.040 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.041 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.041 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.041 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.041 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.041 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.041 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.041 "name": "raid_bdev1", 00:15:25.041 "uuid": "d431fe41-77f6-4b4c-9df4-593ab96b5e93", 00:15:25.041 "strip_size_kb": 0, 00:15:25.041 "state": "online", 00:15:25.041 
"raid_level": "raid1", 00:15:25.041 "superblock": true, 00:15:25.041 "num_base_bdevs": 2, 00:15:25.041 "num_base_bdevs_discovered": 1, 00:15:25.041 "num_base_bdevs_operational": 1, 00:15:25.041 "base_bdevs_list": [ 00:15:25.041 { 00:15:25.041 "name": null, 00:15:25.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.041 "is_configured": false, 00:15:25.041 "data_offset": 256, 00:15:25.041 "data_size": 7936 00:15:25.041 }, 00:15:25.041 { 00:15:25.041 "name": "pt2", 00:15:25.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:25.041 "is_configured": true, 00:15:25.041 "data_offset": 256, 00:15:25.041 "data_size": 7936 00:15:25.041 } 00:15:25.041 ] 00:15:25.041 }' 00:15:25.041 01:16:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.041 01:16:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:15:25.610 [2024-10-15 01:16:38.123858] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' d431fe41-77f6-4b4c-9df4-593ab96b5e93 '!=' d431fe41-77f6-4b4c-9df4-593ab96b5e93 ']' 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96276 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96276 ']' 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96276 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96276 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:25.610 killing process with pid 96276 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96276' 00:15:25.610 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96276 00:15:25.610 [2024-10-15 01:16:38.192415] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:25.610 [2024-10-15 01:16:38.192522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.610 [2024-10-15 01:16:38.192576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:25.610 01:16:38 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96276 00:15:25.610 [2024-10-15 01:16:38.192585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:25.610 [2024-10-15 01:16:38.215589] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:25.870 ************************************ 00:15:25.870 END TEST raid_superblock_test_4k 00:15:25.870 ************************************ 00:15:25.870 01:16:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:15:25.870 00:15:25.870 real 0m4.819s 00:15:25.870 user 0m7.885s 00:15:25.870 sys 0m1.050s 00:15:25.870 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.870 01:16:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.870 01:16:38 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:15:25.870 01:16:38 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:15:25.870 01:16:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:25.870 01:16:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.870 01:16:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:25.870 ************************************ 00:15:25.870 START TEST raid_rebuild_test_sb_4k 00:15:25.870 ************************************ 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:25.870 01:16:38 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=96582 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 96582 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96582 ']' 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.870 01:16:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.870 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:25.870 Zero copy mechanism will not be used. 00:15:25.870 [2024-10-15 01:16:38.589026] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:15:25.870 [2024-10-15 01:16:38.589174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96582 ] 00:15:26.130 [2024-10-15 01:16:38.716670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.130 [2024-10-15 01:16:38.745654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.130 [2024-10-15 01:16:38.788155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.130 [2024-10-15 01:16:38.788210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:27.069 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:27.069 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:27.069 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.070 BaseBdev1_malloc 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.070 [2024-10-15 01:16:39.470510] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:27.070 [2024-10-15 01:16:39.470623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.070 [2024-10-15 01:16:39.470673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:27.070 [2024-10-15 01:16:39.470708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.070 [2024-10-15 01:16:39.472787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.070 [2024-10-15 01:16:39.472858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:27.070 BaseBdev1 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.070 BaseBdev2_malloc 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.070 [2024-10-15 01:16:39.499203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:27.070 [2024-10-15 01:16:39.499307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:27.070 [2024-10-15 01:16:39.499346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:27.070 [2024-10-15 01:16:39.499374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.070 [2024-10-15 01:16:39.501468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.070 [2024-10-15 01:16:39.501544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:27.070 BaseBdev2 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.070 spare_malloc 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.070 spare_delay 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.070 
[2024-10-15 01:16:39.539985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:27.070 [2024-10-15 01:16:39.540100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.070 [2024-10-15 01:16:39.540169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:27.070 [2024-10-15 01:16:39.540214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.070 [2024-10-15 01:16:39.542429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.070 [2024-10-15 01:16:39.542500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:27.070 spare 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.070 [2024-10-15 01:16:39.551996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.070 [2024-10-15 01:16:39.553923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.070 [2024-10-15 01:16:39.554139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:27.070 [2024-10-15 01:16:39.554213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:27.070 [2024-10-15 01:16:39.554558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:27.070 [2024-10-15 01:16:39.554718] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:27.070 [2024-10-15 
01:16:39.554731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:27.070 [2024-10-15 01:16:39.554874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.070 "name": "raid_bdev1", 00:15:27.070 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:27.070 "strip_size_kb": 0, 00:15:27.070 "state": "online", 00:15:27.070 "raid_level": "raid1", 00:15:27.070 "superblock": true, 00:15:27.070 "num_base_bdevs": 2, 00:15:27.070 "num_base_bdevs_discovered": 2, 00:15:27.070 "num_base_bdevs_operational": 2, 00:15:27.070 "base_bdevs_list": [ 00:15:27.070 { 00:15:27.070 "name": "BaseBdev1", 00:15:27.070 "uuid": "20b5f159-fa25-503e-87d5-15039398a8fd", 00:15:27.070 "is_configured": true, 00:15:27.070 "data_offset": 256, 00:15:27.070 "data_size": 7936 00:15:27.070 }, 00:15:27.070 { 00:15:27.070 "name": "BaseBdev2", 00:15:27.070 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:27.070 "is_configured": true, 00:15:27.070 "data_offset": 256, 00:15:27.070 "data_size": 7936 00:15:27.070 } 00:15:27.070 ] 00:15:27.070 }' 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.070 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.330 01:16:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:27.330 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.330 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.330 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:27.330 [2024-10-15 01:16:40.007475] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.330 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.330 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:15:27.330 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.330 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:27.589 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.589 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.589 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.590 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:27.590 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:27.590 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:27.590 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:27.590 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:27.590 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.590 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:27.590 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:27.590 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:27.590 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:27.590 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:27.590 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:27.590 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:27.590 
01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:27.590 [2024-10-15 01:16:40.294803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:27.850 /dev/nbd0 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.850 1+0 records in 00:15:27.850 1+0 records out 00:15:27.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548599 s, 7.5 MB/s 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:27.850 01:16:40 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:27.850 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:28.419 7936+0 records in 00:15:28.419 7936+0 records out 00:15:28.419 32505856 bytes (33 MB, 31 MiB) copied, 0.561002 s, 57.9 MB/s 00:15:28.419 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:28.419 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.419 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:28.419 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:28.419 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:28.419 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:28.419 01:16:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:28.679 
[2024-10-15 01:16:41.145249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.679 [2024-10-15 01:16:41.162099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.679 01:16:41 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.679 "name": "raid_bdev1", 00:15:28.679 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:28.679 "strip_size_kb": 0, 00:15:28.679 "state": "online", 00:15:28.679 "raid_level": "raid1", 00:15:28.679 "superblock": true, 00:15:28.679 "num_base_bdevs": 2, 00:15:28.679 "num_base_bdevs_discovered": 1, 00:15:28.679 "num_base_bdevs_operational": 1, 00:15:28.679 "base_bdevs_list": [ 00:15:28.679 { 00:15:28.679 "name": null, 00:15:28.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.679 "is_configured": false, 00:15:28.679 "data_offset": 0, 00:15:28.679 "data_size": 7936 00:15:28.679 }, 00:15:28.679 { 00:15:28.679 "name": "BaseBdev2", 00:15:28.679 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:28.679 "is_configured": true, 00:15:28.679 "data_offset": 256, 00:15:28.679 
"data_size": 7936 00:15:28.679 } 00:15:28.679 ] 00:15:28.679 }' 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.679 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.939 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:28.939 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.939 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.939 [2024-10-15 01:16:41.657281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.199 [2024-10-15 01:16:41.673145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:15:29.199 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.199 01:16:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:29.199 [2024-10-15 01:16:41.675567] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:30.138 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.138 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.138 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.138 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.138 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.138 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.138 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:30.138 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.138 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.138 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.138 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.138 "name": "raid_bdev1", 00:15:30.138 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:30.138 "strip_size_kb": 0, 00:15:30.138 "state": "online", 00:15:30.138 "raid_level": "raid1", 00:15:30.138 "superblock": true, 00:15:30.138 "num_base_bdevs": 2, 00:15:30.138 "num_base_bdevs_discovered": 2, 00:15:30.138 "num_base_bdevs_operational": 2, 00:15:30.138 "process": { 00:15:30.138 "type": "rebuild", 00:15:30.138 "target": "spare", 00:15:30.138 "progress": { 00:15:30.139 "blocks": 2560, 00:15:30.139 "percent": 32 00:15:30.139 } 00:15:30.139 }, 00:15:30.139 "base_bdevs_list": [ 00:15:30.139 { 00:15:30.139 "name": "spare", 00:15:30.139 "uuid": "8259db31-1699-523e-8e3e-2ca8a20aa871", 00:15:30.139 "is_configured": true, 00:15:30.139 "data_offset": 256, 00:15:30.139 "data_size": 7936 00:15:30.139 }, 00:15:30.139 { 00:15:30.139 "name": "BaseBdev2", 00:15:30.139 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:30.139 "is_configured": true, 00:15:30.139 "data_offset": 256, 00:15:30.139 "data_size": 7936 00:15:30.139 } 00:15:30.139 ] 00:15:30.139 }' 00:15:30.139 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.139 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.139 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.139 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:30.139 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:30.139 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.139 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.139 [2024-10-15 01:16:42.839618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.398 [2024-10-15 01:16:42.881344] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:30.398 [2024-10-15 01:16:42.881519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.398 [2024-10-15 01:16:42.881544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.398 [2024-10-15 01:16:42.881553] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.398 "name": "raid_bdev1", 00:15:30.398 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:30.398 "strip_size_kb": 0, 00:15:30.398 "state": "online", 00:15:30.398 "raid_level": "raid1", 00:15:30.398 "superblock": true, 00:15:30.398 "num_base_bdevs": 2, 00:15:30.398 "num_base_bdevs_discovered": 1, 00:15:30.398 "num_base_bdevs_operational": 1, 00:15:30.398 "base_bdevs_list": [ 00:15:30.398 { 00:15:30.398 "name": null, 00:15:30.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.398 "is_configured": false, 00:15:30.398 "data_offset": 0, 00:15:30.398 "data_size": 7936 00:15:30.398 }, 00:15:30.398 { 00:15:30.398 "name": "BaseBdev2", 00:15:30.398 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:30.398 "is_configured": true, 00:15:30.398 "data_offset": 256, 00:15:30.398 "data_size": 7936 00:15:30.398 } 00:15:30.398 ] 00:15:30.398 }' 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.398 01:16:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.658 01:16:43 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:30.658 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.658 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:30.658 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:30.658 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.658 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.658 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.658 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.658 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.918 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.918 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.918 "name": "raid_bdev1", 00:15:30.918 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:30.918 "strip_size_kb": 0, 00:15:30.918 "state": "online", 00:15:30.918 "raid_level": "raid1", 00:15:30.918 "superblock": true, 00:15:30.918 "num_base_bdevs": 2, 00:15:30.918 "num_base_bdevs_discovered": 1, 00:15:30.918 "num_base_bdevs_operational": 1, 00:15:30.918 "base_bdevs_list": [ 00:15:30.918 { 00:15:30.918 "name": null, 00:15:30.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.918 "is_configured": false, 00:15:30.918 "data_offset": 0, 00:15:30.918 "data_size": 7936 00:15:30.918 }, 00:15:30.918 { 00:15:30.918 "name": "BaseBdev2", 00:15:30.918 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:30.918 "is_configured": true, 00:15:30.918 "data_offset": 
256, 00:15:30.918 "data_size": 7936 00:15:30.918 } 00:15:30.918 ] 00:15:30.918 }' 00:15:30.918 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.918 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:30.918 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.918 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:30.918 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:30.918 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.918 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.918 [2024-10-15 01:16:43.513659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:30.918 [2024-10-15 01:16:43.518753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:15:30.918 [2024-10-15 01:16:43.520643] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:30.918 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.918 01:16:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:31.858 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.858 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.858 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.858 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.858 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.858 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.858 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.858 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.858 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.858 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.858 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.858 "name": "raid_bdev1", 00:15:31.858 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:31.858 "strip_size_kb": 0, 00:15:31.858 "state": "online", 00:15:31.858 "raid_level": "raid1", 00:15:31.858 "superblock": true, 00:15:31.858 "num_base_bdevs": 2, 00:15:31.858 "num_base_bdevs_discovered": 2, 00:15:31.858 "num_base_bdevs_operational": 2, 00:15:31.858 "process": { 00:15:31.858 "type": "rebuild", 00:15:31.858 "target": "spare", 00:15:31.858 "progress": { 00:15:31.858 "blocks": 2560, 00:15:31.858 "percent": 32 00:15:31.858 } 00:15:31.858 }, 00:15:31.858 "base_bdevs_list": [ 00:15:31.858 { 00:15:31.858 "name": "spare", 00:15:31.858 "uuid": "8259db31-1699-523e-8e3e-2ca8a20aa871", 00:15:31.858 "is_configured": true, 00:15:31.858 "data_offset": 256, 00:15:31.858 "data_size": 7936 00:15:31.858 }, 00:15:31.858 { 00:15:31.858 "name": "BaseBdev2", 00:15:31.858 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:31.858 "is_configured": true, 00:15:31.858 "data_offset": 256, 00:15:31.858 "data_size": 7936 00:15:31.858 } 00:15:31.858 ] 00:15:31.858 }' 00:15:32.117 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.117 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:32.117 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.117 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.117 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:32.117 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:32.117 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:32.117 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:32.117 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:32.117 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:32.117 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=556 00:15:32.117 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.117 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.117 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.117 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.117 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.118 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.118 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.118 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.118 01:16:44 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.118 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.118 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.118 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.118 "name": "raid_bdev1", 00:15:32.118 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:32.118 "strip_size_kb": 0, 00:15:32.118 "state": "online", 00:15:32.118 "raid_level": "raid1", 00:15:32.118 "superblock": true, 00:15:32.118 "num_base_bdevs": 2, 00:15:32.118 "num_base_bdevs_discovered": 2, 00:15:32.118 "num_base_bdevs_operational": 2, 00:15:32.118 "process": { 00:15:32.118 "type": "rebuild", 00:15:32.118 "target": "spare", 00:15:32.118 "progress": { 00:15:32.118 "blocks": 2816, 00:15:32.118 "percent": 35 00:15:32.118 } 00:15:32.118 }, 00:15:32.118 "base_bdevs_list": [ 00:15:32.118 { 00:15:32.118 "name": "spare", 00:15:32.118 "uuid": "8259db31-1699-523e-8e3e-2ca8a20aa871", 00:15:32.118 "is_configured": true, 00:15:32.118 "data_offset": 256, 00:15:32.118 "data_size": 7936 00:15:32.118 }, 00:15:32.118 { 00:15:32.118 "name": "BaseBdev2", 00:15:32.118 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:32.118 "is_configured": true, 00:15:32.118 "data_offset": 256, 00:15:32.118 "data_size": 7936 00:15:32.118 } 00:15:32.118 ] 00:15:32.118 }' 00:15:32.118 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.118 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.118 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.118 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.118 01:16:44 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.500 "name": "raid_bdev1", 00:15:33.500 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:33.500 "strip_size_kb": 0, 00:15:33.500 "state": "online", 00:15:33.500 "raid_level": "raid1", 00:15:33.500 "superblock": true, 00:15:33.500 "num_base_bdevs": 2, 00:15:33.500 "num_base_bdevs_discovered": 2, 00:15:33.500 "num_base_bdevs_operational": 2, 00:15:33.500 "process": { 00:15:33.500 "type": "rebuild", 00:15:33.500 "target": "spare", 00:15:33.500 "progress": { 00:15:33.500 "blocks": 5632, 00:15:33.500 "percent": 70 00:15:33.500 } 00:15:33.500 }, 00:15:33.500 "base_bdevs_list": [ 00:15:33.500 { 
00:15:33.500 "name": "spare", 00:15:33.500 "uuid": "8259db31-1699-523e-8e3e-2ca8a20aa871", 00:15:33.500 "is_configured": true, 00:15:33.500 "data_offset": 256, 00:15:33.500 "data_size": 7936 00:15:33.500 }, 00:15:33.500 { 00:15:33.500 "name": "BaseBdev2", 00:15:33.500 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:33.500 "is_configured": true, 00:15:33.500 "data_offset": 256, 00:15:33.500 "data_size": 7936 00:15:33.500 } 00:15:33.500 ] 00:15:33.500 }' 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.500 01:16:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.070 [2024-10-15 01:16:46.633790] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:34.070 [2024-10-15 01:16:46.633990] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:34.070 [2024-10-15 01:16:46.634170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.330 01:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.330 01:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.330 01:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.330 01:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.330 01:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:34.330 01:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.330 01:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.330 01:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.330 01:16:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.330 01:16:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.330 01:16:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.330 01:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.330 "name": "raid_bdev1", 00:15:34.330 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:34.330 "strip_size_kb": 0, 00:15:34.330 "state": "online", 00:15:34.330 "raid_level": "raid1", 00:15:34.330 "superblock": true, 00:15:34.330 "num_base_bdevs": 2, 00:15:34.330 "num_base_bdevs_discovered": 2, 00:15:34.330 "num_base_bdevs_operational": 2, 00:15:34.330 "base_bdevs_list": [ 00:15:34.330 { 00:15:34.330 "name": "spare", 00:15:34.330 "uuid": "8259db31-1699-523e-8e3e-2ca8a20aa871", 00:15:34.330 "is_configured": true, 00:15:34.330 "data_offset": 256, 00:15:34.330 "data_size": 7936 00:15:34.330 }, 00:15:34.330 { 00:15:34.330 "name": "BaseBdev2", 00:15:34.330 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:34.330 "is_configured": true, 00:15:34.330 "data_offset": 256, 00:15:34.330 "data_size": 7936 00:15:34.330 } 00:15:34.330 ] 00:15:34.330 }' 00:15:34.330 01:16:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.330 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:34.330 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.591 "name": "raid_bdev1", 00:15:34.591 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:34.591 "strip_size_kb": 0, 00:15:34.591 "state": "online", 00:15:34.591 "raid_level": "raid1", 00:15:34.591 "superblock": true, 00:15:34.591 "num_base_bdevs": 2, 00:15:34.591 "num_base_bdevs_discovered": 2, 00:15:34.591 "num_base_bdevs_operational": 2, 00:15:34.591 "base_bdevs_list": [ 00:15:34.591 { 00:15:34.591 "name": "spare", 00:15:34.591 "uuid": "8259db31-1699-523e-8e3e-2ca8a20aa871", 00:15:34.591 "is_configured": true, 00:15:34.591 
"data_offset": 256, 00:15:34.591 "data_size": 7936 00:15:34.591 }, 00:15:34.591 { 00:15:34.591 "name": "BaseBdev2", 00:15:34.591 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:34.591 "is_configured": true, 00:15:34.591 "data_offset": 256, 00:15:34.591 "data_size": 7936 00:15:34.591 } 00:15:34.591 ] 00:15:34.591 }' 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.591 "name": "raid_bdev1", 00:15:34.591 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:34.591 "strip_size_kb": 0, 00:15:34.591 "state": "online", 00:15:34.591 "raid_level": "raid1", 00:15:34.591 "superblock": true, 00:15:34.591 "num_base_bdevs": 2, 00:15:34.591 "num_base_bdevs_discovered": 2, 00:15:34.591 "num_base_bdevs_operational": 2, 00:15:34.591 "base_bdevs_list": [ 00:15:34.591 { 00:15:34.591 "name": "spare", 00:15:34.591 "uuid": "8259db31-1699-523e-8e3e-2ca8a20aa871", 00:15:34.591 "is_configured": true, 00:15:34.591 "data_offset": 256, 00:15:34.591 "data_size": 7936 00:15:34.591 }, 00:15:34.591 { 00:15:34.591 "name": "BaseBdev2", 00:15:34.591 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:34.591 "is_configured": true, 00:15:34.591 "data_offset": 256, 00:15:34.591 "data_size": 7936 00:15:34.591 } 00:15:34.591 ] 00:15:34.591 }' 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.591 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.161 
[2024-10-15 01:16:47.625442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.161 [2024-10-15 01:16:47.625475] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.161 [2024-10-15 01:16:47.625563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.161 [2024-10-15 01:16:47.625634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.161 [2024-10-15 01:16:47.625648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:35.161 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:35.424 /dev/nbd0 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:35.424 1+0 records in 00:15:35.424 1+0 records out 00:15:35.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599725 s, 6.8 MB/s 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:35.424 01:16:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:35.683 /dev/nbd1 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:35.683 1+0 records in 00:15:35.683 1+0 records out 00:15:35.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555775 s, 7.4 MB/s 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.683 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:35.942 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:35.942 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:35.942 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:35.942 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.942 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.942 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:35.942 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:35.942 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.942 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.942 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:36.202 01:16:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.202 [2024-10-15 01:16:48.734415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:36.202 [2024-10-15 01:16:48.734484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.202 [2024-10-15 01:16:48.734505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:36.202 [2024-10-15 01:16:48.734519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.202 [2024-10-15 01:16:48.736875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.202 
[2024-10-15 01:16:48.736960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:36.202 [2024-10-15 01:16:48.737105] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:36.202 [2024-10-15 01:16:48.737193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.202 [2024-10-15 01:16:48.737377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.202 spare 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.202 [2024-10-15 01:16:48.837328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:36.202 [2024-10-15 01:16:48.837366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:36.202 [2024-10-15 01:16:48.837725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:15:36.202 [2024-10-15 01:16:48.837938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:36.202 [2024-10-15 01:16:48.837951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:36.202 [2024-10-15 01:16:48.838123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:36.202 01:16:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.202 "name": "raid_bdev1", 00:15:36.202 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:36.202 "strip_size_kb": 0, 00:15:36.202 "state": "online", 00:15:36.202 "raid_level": "raid1", 00:15:36.202 "superblock": true, 00:15:36.202 "num_base_bdevs": 2, 00:15:36.202 "num_base_bdevs_discovered": 2, 00:15:36.202 "num_base_bdevs_operational": 2, 
00:15:36.202 "base_bdevs_list": [ 00:15:36.202 { 00:15:36.202 "name": "spare", 00:15:36.202 "uuid": "8259db31-1699-523e-8e3e-2ca8a20aa871", 00:15:36.202 "is_configured": true, 00:15:36.202 "data_offset": 256, 00:15:36.202 "data_size": 7936 00:15:36.202 }, 00:15:36.202 { 00:15:36.202 "name": "BaseBdev2", 00:15:36.202 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:36.202 "is_configured": true, 00:15:36.202 "data_offset": 256, 00:15:36.202 "data_size": 7936 00:15:36.202 } 00:15:36.202 ] 00:15:36.202 }' 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.202 01:16:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.772 "name": "raid_bdev1", 00:15:36.772 
"uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:36.772 "strip_size_kb": 0, 00:15:36.772 "state": "online", 00:15:36.772 "raid_level": "raid1", 00:15:36.772 "superblock": true, 00:15:36.772 "num_base_bdevs": 2, 00:15:36.772 "num_base_bdevs_discovered": 2, 00:15:36.772 "num_base_bdevs_operational": 2, 00:15:36.772 "base_bdevs_list": [ 00:15:36.772 { 00:15:36.772 "name": "spare", 00:15:36.772 "uuid": "8259db31-1699-523e-8e3e-2ca8a20aa871", 00:15:36.772 "is_configured": true, 00:15:36.772 "data_offset": 256, 00:15:36.772 "data_size": 7936 00:15:36.772 }, 00:15:36.772 { 00:15:36.772 "name": "BaseBdev2", 00:15:36.772 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:36.772 "is_configured": true, 00:15:36.772 "data_offset": 256, 00:15:36.772 "data_size": 7936 00:15:36.772 } 00:15:36.772 ] 00:15:36.772 }' 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.772 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.773 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.773 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:36.773 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.773 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.773 01:16:49 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:36.773 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.773 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.773 [2024-10-15 01:16:49.489255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.032 
01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.032 "name": "raid_bdev1", 00:15:37.032 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:37.032 "strip_size_kb": 0, 00:15:37.032 "state": "online", 00:15:37.032 "raid_level": "raid1", 00:15:37.032 "superblock": true, 00:15:37.032 "num_base_bdevs": 2, 00:15:37.032 "num_base_bdevs_discovered": 1, 00:15:37.032 "num_base_bdevs_operational": 1, 00:15:37.032 "base_bdevs_list": [ 00:15:37.032 { 00:15:37.032 "name": null, 00:15:37.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.032 "is_configured": false, 00:15:37.032 "data_offset": 0, 00:15:37.032 "data_size": 7936 00:15:37.032 }, 00:15:37.032 { 00:15:37.032 "name": "BaseBdev2", 00:15:37.032 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:37.032 "is_configured": true, 00:15:37.032 "data_offset": 256, 00:15:37.032 "data_size": 7936 00:15:37.032 } 00:15:37.032 ] 00:15:37.032 }' 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.032 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.303 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:37.303 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.303 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.303 [2024-10-15 01:16:49.944483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:37.303 [2024-10-15 01:16:49.944754] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:15:37.303 [2024-10-15 01:16:49.944815] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:37.303 [2024-10-15 01:16:49.944956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:37.303 [2024-10-15 01:16:49.949888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:15:37.303 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.303 01:16:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:37.303 [2024-10-15 01:16:49.951901] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:38.283 01:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.283 01:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.283 01:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.283 01:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.283 01:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.284 01:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.284 01:16:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.284 01:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.284 01:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.284 01:16:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.544 
"name": "raid_bdev1", 00:15:38.544 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:38.544 "strip_size_kb": 0, 00:15:38.544 "state": "online", 00:15:38.544 "raid_level": "raid1", 00:15:38.544 "superblock": true, 00:15:38.544 "num_base_bdevs": 2, 00:15:38.544 "num_base_bdevs_discovered": 2, 00:15:38.544 "num_base_bdevs_operational": 2, 00:15:38.544 "process": { 00:15:38.544 "type": "rebuild", 00:15:38.544 "target": "spare", 00:15:38.544 "progress": { 00:15:38.544 "blocks": 2560, 00:15:38.544 "percent": 32 00:15:38.544 } 00:15:38.544 }, 00:15:38.544 "base_bdevs_list": [ 00:15:38.544 { 00:15:38.544 "name": "spare", 00:15:38.544 "uuid": "8259db31-1699-523e-8e3e-2ca8a20aa871", 00:15:38.544 "is_configured": true, 00:15:38.544 "data_offset": 256, 00:15:38.544 "data_size": 7936 00:15:38.544 }, 00:15:38.544 { 00:15:38.544 "name": "BaseBdev2", 00:15:38.544 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:38.544 "is_configured": true, 00:15:38.544 "data_offset": 256, 00:15:38.544 "data_size": 7936 00:15:38.544 } 00:15:38.544 ] 00:15:38.544 }' 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.544 [2024-10-15 01:16:51.116619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:38.544 [2024-10-15 
01:16:51.157010] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:38.544 [2024-10-15 01:16:51.157195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.544 [2024-10-15 01:16:51.157233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:38.544 [2024-10-15 01:16:51.157243] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.544 "name": "raid_bdev1", 00:15:38.544 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:38.544 "strip_size_kb": 0, 00:15:38.544 "state": "online", 00:15:38.544 "raid_level": "raid1", 00:15:38.544 "superblock": true, 00:15:38.544 "num_base_bdevs": 2, 00:15:38.544 "num_base_bdevs_discovered": 1, 00:15:38.544 "num_base_bdevs_operational": 1, 00:15:38.544 "base_bdevs_list": [ 00:15:38.544 { 00:15:38.544 "name": null, 00:15:38.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.544 "is_configured": false, 00:15:38.544 "data_offset": 0, 00:15:38.544 "data_size": 7936 00:15:38.544 }, 00:15:38.544 { 00:15:38.544 "name": "BaseBdev2", 00:15:38.544 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:38.544 "is_configured": true, 00:15:38.544 "data_offset": 256, 00:15:38.544 "data_size": 7936 00:15:38.544 } 00:15:38.544 ] 00:15:38.544 }' 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.544 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.115 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:39.115 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.115 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.115 [2024-10-15 01:16:51.601296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:39.115 [2024-10-15 01:16:51.601397] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.115 [2024-10-15 01:16:51.601425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:39.115 [2024-10-15 01:16:51.601434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.115 [2024-10-15 01:16:51.601889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.115 [2024-10-15 01:16:51.601906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:39.115 [2024-10-15 01:16:51.601994] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:39.115 [2024-10-15 01:16:51.602005] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:39.115 [2024-10-15 01:16:51.602021] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:39.115 [2024-10-15 01:16:51.602041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:39.115 [2024-10-15 01:16:51.606952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:15:39.115 spare 00:15:39.115 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.115 01:16:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:39.115 [2024-10-15 01:16:51.608898] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.055 "name": "raid_bdev1", 00:15:40.055 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:40.055 "strip_size_kb": 0, 00:15:40.055 
"state": "online", 00:15:40.055 "raid_level": "raid1", 00:15:40.055 "superblock": true, 00:15:40.055 "num_base_bdevs": 2, 00:15:40.055 "num_base_bdevs_discovered": 2, 00:15:40.055 "num_base_bdevs_operational": 2, 00:15:40.055 "process": { 00:15:40.055 "type": "rebuild", 00:15:40.055 "target": "spare", 00:15:40.055 "progress": { 00:15:40.055 "blocks": 2560, 00:15:40.055 "percent": 32 00:15:40.055 } 00:15:40.055 }, 00:15:40.055 "base_bdevs_list": [ 00:15:40.055 { 00:15:40.055 "name": "spare", 00:15:40.055 "uuid": "8259db31-1699-523e-8e3e-2ca8a20aa871", 00:15:40.055 "is_configured": true, 00:15:40.055 "data_offset": 256, 00:15:40.055 "data_size": 7936 00:15:40.055 }, 00:15:40.055 { 00:15:40.055 "name": "BaseBdev2", 00:15:40.055 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:40.055 "is_configured": true, 00:15:40.055 "data_offset": 256, 00:15:40.055 "data_size": 7936 00:15:40.055 } 00:15:40.055 ] 00:15:40.055 }' 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.055 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.055 [2024-10-15 01:16:52.769667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.316 [2024-10-15 01:16:52.814044] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:40.316 [2024-10-15 01:16:52.814136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.316 [2024-10-15 01:16:52.814151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.316 [2024-10-15 01:16:52.814162] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.316 01:16:52 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.316 "name": "raid_bdev1", 00:15:40.316 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:40.316 "strip_size_kb": 0, 00:15:40.316 "state": "online", 00:15:40.316 "raid_level": "raid1", 00:15:40.316 "superblock": true, 00:15:40.316 "num_base_bdevs": 2, 00:15:40.316 "num_base_bdevs_discovered": 1, 00:15:40.316 "num_base_bdevs_operational": 1, 00:15:40.316 "base_bdevs_list": [ 00:15:40.316 { 00:15:40.316 "name": null, 00:15:40.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.316 "is_configured": false, 00:15:40.316 "data_offset": 0, 00:15:40.316 "data_size": 7936 00:15:40.316 }, 00:15:40.316 { 00:15:40.316 "name": "BaseBdev2", 00:15:40.316 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:40.316 "is_configured": true, 00:15:40.316 "data_offset": 256, 00:15:40.316 "data_size": 7936 00:15:40.316 } 00:15:40.316 ] 00:15:40.316 }' 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.316 01:16:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.576 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:40.576 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.576 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:40.576 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:40.576 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.576 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.576 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.576 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.576 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.576 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.837 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.837 "name": "raid_bdev1", 00:15:40.837 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:40.837 "strip_size_kb": 0, 00:15:40.837 "state": "online", 00:15:40.837 "raid_level": "raid1", 00:15:40.837 "superblock": true, 00:15:40.837 "num_base_bdevs": 2, 00:15:40.837 "num_base_bdevs_discovered": 1, 00:15:40.837 "num_base_bdevs_operational": 1, 00:15:40.837 "base_bdevs_list": [ 00:15:40.837 { 00:15:40.837 "name": null, 00:15:40.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.837 "is_configured": false, 00:15:40.837 "data_offset": 0, 00:15:40.837 "data_size": 7936 00:15:40.837 }, 00:15:40.837 { 00:15:40.837 "name": "BaseBdev2", 00:15:40.837 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:40.837 "is_configured": true, 00:15:40.837 "data_offset": 256, 00:15:40.837 "data_size": 7936 00:15:40.837 } 00:15:40.837 ] 00:15:40.837 }' 00:15:40.837 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.837 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:40.837 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.837 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:40.837 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:40.837 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.837 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.837 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.837 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:40.837 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.837 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.837 [2024-10-15 01:16:53.429942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:40.837 [2024-10-15 01:16:53.430080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.837 [2024-10-15 01:16:53.430105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:40.837 [2024-10-15 01:16:53.430116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.837 [2024-10-15 01:16:53.430563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.837 [2024-10-15 01:16:53.430584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:40.837 [2024-10-15 01:16:53.430663] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:40.837 [2024-10-15 01:16:53.430682] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:40.837 [2024-10-15 01:16:53.430698] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:40.837 [2024-10-15 01:16:53.430717] 
bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:40.837 BaseBdev1 00:15:40.837 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.837 01:16:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.777 "name": "raid_bdev1", 00:15:41.777 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:41.777 "strip_size_kb": 0, 00:15:41.777 "state": "online", 00:15:41.777 "raid_level": "raid1", 00:15:41.777 "superblock": true, 00:15:41.777 "num_base_bdevs": 2, 00:15:41.777 "num_base_bdevs_discovered": 1, 00:15:41.777 "num_base_bdevs_operational": 1, 00:15:41.777 "base_bdevs_list": [ 00:15:41.777 { 00:15:41.777 "name": null, 00:15:41.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.777 "is_configured": false, 00:15:41.777 "data_offset": 0, 00:15:41.777 "data_size": 7936 00:15:41.777 }, 00:15:41.777 { 00:15:41.777 "name": "BaseBdev2", 00:15:41.777 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:41.777 "is_configured": true, 00:15:41.777 "data_offset": 256, 00:15:41.777 "data_size": 7936 00:15:41.777 } 00:15:41.777 ] 00:15:41.777 }' 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.777 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:42.347 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.347 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.347 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.347 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.347 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.347 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.347 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:42.347 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.347 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:42.347 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.347 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.347 "name": "raid_bdev1", 00:15:42.347 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:42.347 "strip_size_kb": 0, 00:15:42.347 "state": "online", 00:15:42.347 "raid_level": "raid1", 00:15:42.347 "superblock": true, 00:15:42.347 "num_base_bdevs": 2, 00:15:42.347 "num_base_bdevs_discovered": 1, 00:15:42.347 "num_base_bdevs_operational": 1, 00:15:42.347 "base_bdevs_list": [ 00:15:42.347 { 00:15:42.347 "name": null, 00:15:42.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.347 "is_configured": false, 00:15:42.347 "data_offset": 0, 00:15:42.347 "data_size": 7936 00:15:42.347 }, 00:15:42.347 { 00:15:42.347 "name": "BaseBdev2", 00:15:42.347 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:42.347 "is_configured": true, 00:15:42.347 "data_offset": 256, 00:15:42.347 "data_size": 7936 00:15:42.347 } 00:15:42.347 ] 00:15:42.347 }' 00:15:42.347 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.347 01:16:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:42.347 [2024-10-15 01:16:55.055336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.347 [2024-10-15 01:16:55.055565] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:42.347 [2024-10-15 01:16:55.055626] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:42.347 request: 00:15:42.347 { 00:15:42.347 "base_bdev": "BaseBdev1", 00:15:42.347 "raid_bdev": "raid_bdev1", 00:15:42.347 "method": "bdev_raid_add_base_bdev", 00:15:42.347 "req_id": 1 00:15:42.347 } 00:15:42.347 Got JSON-RPC error response 00:15:42.347 response: 00:15:42.347 { 00:15:42.347 "code": -22, 00:15:42.347 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:42.347 } 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:42.347 01:16:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.729 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.729 "name": "raid_bdev1", 00:15:43.729 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:43.729 "strip_size_kb": 0, 00:15:43.729 "state": "online", 00:15:43.729 "raid_level": "raid1", 00:15:43.729 "superblock": true, 00:15:43.729 "num_base_bdevs": 2, 00:15:43.729 "num_base_bdevs_discovered": 1, 00:15:43.729 "num_base_bdevs_operational": 1, 00:15:43.729 "base_bdevs_list": [ 00:15:43.729 { 00:15:43.729 "name": null, 00:15:43.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.729 "is_configured": false, 00:15:43.729 "data_offset": 0, 00:15:43.729 "data_size": 7936 00:15:43.729 }, 00:15:43.729 { 00:15:43.730 "name": "BaseBdev2", 00:15:43.730 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:43.730 "is_configured": true, 00:15:43.730 "data_offset": 256, 00:15:43.730 "data_size": 7936 00:15:43.730 } 00:15:43.730 ] 00:15:43.730 }' 00:15:43.730 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.730 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.989 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.989 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.989 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.989 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.989 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.989 01:16:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.989 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.989 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.989 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.989 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.989 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.989 "name": "raid_bdev1", 00:15:43.989 "uuid": "16639b6a-4e88-4779-838e-49649f8b60fc", 00:15:43.989 "strip_size_kb": 0, 00:15:43.989 "state": "online", 00:15:43.989 "raid_level": "raid1", 00:15:43.989 "superblock": true, 00:15:43.989 "num_base_bdevs": 2, 00:15:43.989 "num_base_bdevs_discovered": 1, 00:15:43.989 "num_base_bdevs_operational": 1, 00:15:43.989 "base_bdevs_list": [ 00:15:43.989 { 00:15:43.989 "name": null, 00:15:43.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.989 "is_configured": false, 00:15:43.989 "data_offset": 0, 00:15:43.989 "data_size": 7936 00:15:43.989 }, 00:15:43.989 { 00:15:43.989 "name": "BaseBdev2", 00:15:43.989 "uuid": "775f010f-cb68-5eea-8b0d-fc467de16b5a", 00:15:43.989 "is_configured": true, 00:15:43.989 "data_offset": 256, 00:15:43.989 "data_size": 7936 00:15:43.989 } 00:15:43.989 ] 00:15:43.989 }' 00:15:43.989 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.989 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.989 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.990 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.990 01:16:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 96582 00:15:43.990 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96582 ']' 00:15:43.990 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96582 00:15:43.990 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:43.990 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:43.990 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96582 00:15:43.990 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:43.990 killing process with pid 96582 00:15:43.990 Received shutdown signal, test time was about 60.000000 seconds 00:15:43.990 00:15:43.990 Latency(us) 00:15:43.990 [2024-10-15T01:16:56.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.990 [2024-10-15T01:16:56.714Z] =================================================================================================================== 00:15:43.990 [2024-10-15T01:16:56.714Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:43.990 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:43.990 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96582' 00:15:43.990 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96582 00:15:43.990 [2024-10-15 01:16:56.701300] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.990 [2024-10-15 01:16:56.701434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.990 [2024-10-15 01:16:56.701488] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:15:43.990 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96582 00:15:43.990 [2024-10-15 01:16:56.701497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:15:44.249 [2024-10-15 01:16:56.733364] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.249 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:15:44.249 00:15:44.249 real 0m18.432s 00:15:44.249 user 0m24.667s 00:15:44.249 sys 0m2.570s 00:15:44.249 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:44.249 ************************************ 00:15:44.249 END TEST raid_rebuild_test_sb_4k 00:15:44.249 ************************************ 00:15:44.249 01:16:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.510 01:16:56 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:15:44.510 01:16:56 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:15:44.510 01:16:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:44.510 01:16:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:44.510 01:16:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.510 ************************************ 00:15:44.510 START TEST raid_state_function_test_sb_md_separate 00:15:44.510 ************************************ 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:44.510 01:16:57 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:44.510 01:16:57 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:44.510 Process raid pid: 97260 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97260 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97260' 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97260 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97260 ']' 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:44.510 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:44.510 [2024-10-15 01:16:57.095380] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:15:44.510 [2024-10-15 01:16:57.095502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.770 [2024-10-15 01:16:57.239207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.770 [2024-10-15 01:16:57.268627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.770 [2024-10-15 01:16:57.311226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.770 [2024-10-15 01:16:57.311261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.340 [2024-10-15 01:16:57.945383] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.340 [2024-10-15 01:16:57.945441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:15:45.340 [2024-10-15 01:16:57.945454] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.340 [2024-10-15 01:16:57.945464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.340 01:16:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.340 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.340 "name": "Existed_Raid", 00:15:45.340 "uuid": "9ecd2253-25a6-4ea5-a1d2-7a3705dfd5e7", 00:15:45.340 "strip_size_kb": 0, 00:15:45.340 "state": "configuring", 00:15:45.340 "raid_level": "raid1", 00:15:45.340 "superblock": true, 00:15:45.340 "num_base_bdevs": 2, 00:15:45.340 "num_base_bdevs_discovered": 0, 00:15:45.340 "num_base_bdevs_operational": 2, 00:15:45.340 "base_bdevs_list": [ 00:15:45.340 { 00:15:45.340 "name": "BaseBdev1", 00:15:45.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.340 "is_configured": false, 00:15:45.340 "data_offset": 0, 00:15:45.340 "data_size": 0 00:15:45.340 }, 00:15:45.340 { 00:15:45.340 "name": "BaseBdev2", 00:15:45.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.340 "is_configured": false, 00:15:45.340 "data_offset": 0, 00:15:45.340 "data_size": 0 00:15:45.340 } 00:15:45.340 ] 00:15:45.340 }' 00:15:45.340 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.340 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.909 
[2024-10-15 01:16:58.396406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.909 [2024-10-15 01:16:58.396458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.909 [2024-10-15 01:16:58.408392] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.909 [2024-10-15 01:16:58.408479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.909 [2024-10-15 01:16:58.408526] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.909 [2024-10-15 01:16:58.408563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.909 [2024-10-15 01:16:58.429728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.909 
BaseBdev1 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.909 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.909 [ 00:15:45.909 { 00:15:45.909 "name": "BaseBdev1", 00:15:45.909 "aliases": [ 00:15:45.909 "1d42fa84-0008-4032-914e-324aef5952af" 00:15:45.909 ], 00:15:45.909 "product_name": "Malloc disk", 
00:15:45.909 "block_size": 4096, 00:15:45.909 "num_blocks": 8192, 00:15:45.909 "uuid": "1d42fa84-0008-4032-914e-324aef5952af", 00:15:45.909 "md_size": 32, 00:15:45.909 "md_interleave": false, 00:15:45.909 "dif_type": 0, 00:15:45.909 "assigned_rate_limits": { 00:15:45.909 "rw_ios_per_sec": 0, 00:15:45.909 "rw_mbytes_per_sec": 0, 00:15:45.909 "r_mbytes_per_sec": 0, 00:15:45.909 "w_mbytes_per_sec": 0 00:15:45.909 }, 00:15:45.909 "claimed": true, 00:15:45.909 "claim_type": "exclusive_write", 00:15:45.909 "zoned": false, 00:15:45.909 "supported_io_types": { 00:15:45.909 "read": true, 00:15:45.909 "write": true, 00:15:45.909 "unmap": true, 00:15:45.909 "flush": true, 00:15:45.909 "reset": true, 00:15:45.909 "nvme_admin": false, 00:15:45.909 "nvme_io": false, 00:15:45.909 "nvme_io_md": false, 00:15:45.909 "write_zeroes": true, 00:15:45.909 "zcopy": true, 00:15:45.909 "get_zone_info": false, 00:15:45.909 "zone_management": false, 00:15:45.909 "zone_append": false, 00:15:45.909 "compare": false, 00:15:45.909 "compare_and_write": false, 00:15:45.909 "abort": true, 00:15:45.909 "seek_hole": false, 00:15:45.909 "seek_data": false, 00:15:45.909 "copy": true, 00:15:45.909 "nvme_iov_md": false 00:15:45.909 }, 00:15:45.909 "memory_domains": [ 00:15:45.909 { 00:15:45.909 "dma_device_id": "system", 00:15:45.909 "dma_device_type": 1 00:15:45.909 }, 00:15:45.909 { 00:15:45.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.910 "dma_device_type": 2 00:15:45.910 } 00:15:45.910 ], 00:15:45.910 "driver_specific": {} 00:15:45.910 } 00:15:45.910 ] 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:45.910 01:16:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.910 "name": "Existed_Raid", 00:15:45.910 "uuid": "29cc0374-7eda-4813-aa48-f247a9dea446", 
00:15:45.910 "strip_size_kb": 0, 00:15:45.910 "state": "configuring", 00:15:45.910 "raid_level": "raid1", 00:15:45.910 "superblock": true, 00:15:45.910 "num_base_bdevs": 2, 00:15:45.910 "num_base_bdevs_discovered": 1, 00:15:45.910 "num_base_bdevs_operational": 2, 00:15:45.910 "base_bdevs_list": [ 00:15:45.910 { 00:15:45.910 "name": "BaseBdev1", 00:15:45.910 "uuid": "1d42fa84-0008-4032-914e-324aef5952af", 00:15:45.910 "is_configured": true, 00:15:45.910 "data_offset": 256, 00:15:45.910 "data_size": 7936 00:15:45.910 }, 00:15:45.910 { 00:15:45.910 "name": "BaseBdev2", 00:15:45.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.910 "is_configured": false, 00:15:45.910 "data_offset": 0, 00:15:45.910 "data_size": 0 00:15:45.910 } 00:15:45.910 ] 00:15:45.910 }' 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.910 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.479 [2024-10-15 01:16:58.976875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.479 [2024-10-15 01:16:58.976932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:46.479 01:16:58 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.479 [2024-10-15 01:16:58.988890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.479 [2024-10-15 01:16:58.990834] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.479 [2024-10-15 01:16:58.990880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.479 01:16:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.479 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.479 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.479 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.479 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.479 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.479 "name": "Existed_Raid", 00:15:46.479 "uuid": "aafc7f65-733d-4967-af8a-4e52e2f9d2b9", 00:15:46.479 "strip_size_kb": 0, 00:15:46.479 "state": "configuring", 00:15:46.479 "raid_level": "raid1", 00:15:46.479 "superblock": true, 00:15:46.479 "num_base_bdevs": 2, 00:15:46.479 "num_base_bdevs_discovered": 1, 00:15:46.479 "num_base_bdevs_operational": 2, 00:15:46.479 "base_bdevs_list": [ 00:15:46.479 { 00:15:46.479 "name": "BaseBdev1", 00:15:46.479 "uuid": "1d42fa84-0008-4032-914e-324aef5952af", 00:15:46.479 "is_configured": true, 00:15:46.479 "data_offset": 256, 00:15:46.479 "data_size": 7936 00:15:46.479 }, 00:15:46.479 { 00:15:46.479 "name": "BaseBdev2", 00:15:46.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.479 "is_configured": false, 00:15:46.479 "data_offset": 0, 00:15:46.479 "data_size": 0 00:15:46.479 } 00:15:46.479 ] 00:15:46.479 }' 00:15:46.479 01:16:59 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.479 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.049 [2024-10-15 01:16:59.479716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:47.049 [2024-10-15 01:16:59.480005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:47.049 [2024-10-15 01:16:59.480063] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:47.049 [2024-10-15 01:16:59.480171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:47.049 BaseBdev2 00:15:47.049 [2024-10-15 01:16:59.480363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:47.049 [2024-10-15 01:16:59.480390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:47.049 [2024-10-15 01:16:59.480462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.049 [ 00:15:47.049 { 00:15:47.049 "name": "BaseBdev2", 00:15:47.049 "aliases": [ 00:15:47.049 "49967bca-2ec5-4ae2-9a8d-bd2e42bf30f8" 00:15:47.049 ], 00:15:47.049 "product_name": "Malloc disk", 00:15:47.049 "block_size": 4096, 00:15:47.049 "num_blocks": 8192, 00:15:47.049 "uuid": "49967bca-2ec5-4ae2-9a8d-bd2e42bf30f8", 00:15:47.049 "md_size": 32, 00:15:47.049 "md_interleave": false, 00:15:47.049 "dif_type": 0, 00:15:47.049 "assigned_rate_limits": { 00:15:47.049 "rw_ios_per_sec": 0, 00:15:47.049 "rw_mbytes_per_sec": 0, 00:15:47.049 "r_mbytes_per_sec": 0, 00:15:47.049 "w_mbytes_per_sec": 0 00:15:47.049 }, 00:15:47.049 "claimed": true, 00:15:47.049 "claim_type": 
"exclusive_write", 00:15:47.049 "zoned": false, 00:15:47.049 "supported_io_types": { 00:15:47.049 "read": true, 00:15:47.049 "write": true, 00:15:47.049 "unmap": true, 00:15:47.049 "flush": true, 00:15:47.049 "reset": true, 00:15:47.049 "nvme_admin": false, 00:15:47.049 "nvme_io": false, 00:15:47.049 "nvme_io_md": false, 00:15:47.049 "write_zeroes": true, 00:15:47.049 "zcopy": true, 00:15:47.049 "get_zone_info": false, 00:15:47.049 "zone_management": false, 00:15:47.049 "zone_append": false, 00:15:47.049 "compare": false, 00:15:47.049 "compare_and_write": false, 00:15:47.049 "abort": true, 00:15:47.049 "seek_hole": false, 00:15:47.049 "seek_data": false, 00:15:47.049 "copy": true, 00:15:47.049 "nvme_iov_md": false 00:15:47.049 }, 00:15:47.049 "memory_domains": [ 00:15:47.049 { 00:15:47.049 "dma_device_id": "system", 00:15:47.049 "dma_device_type": 1 00:15:47.049 }, 00:15:47.049 { 00:15:47.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.049 "dma_device_type": 2 00:15:47.049 } 00:15:47.049 ], 00:15:47.049 "driver_specific": {} 00:15:47.049 } 00:15:47.049 ] 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.049 
01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.049 "name": "Existed_Raid", 00:15:47.049 "uuid": "aafc7f65-733d-4967-af8a-4e52e2f9d2b9", 00:15:47.049 "strip_size_kb": 0, 00:15:47.049 "state": "online", 00:15:47.049 "raid_level": "raid1", 00:15:47.049 "superblock": true, 00:15:47.049 "num_base_bdevs": 2, 00:15:47.049 "num_base_bdevs_discovered": 2, 00:15:47.049 "num_base_bdevs_operational": 2, 00:15:47.049 
"base_bdevs_list": [ 00:15:47.049 { 00:15:47.049 "name": "BaseBdev1", 00:15:47.049 "uuid": "1d42fa84-0008-4032-914e-324aef5952af", 00:15:47.049 "is_configured": true, 00:15:47.049 "data_offset": 256, 00:15:47.049 "data_size": 7936 00:15:47.049 }, 00:15:47.049 { 00:15:47.049 "name": "BaseBdev2", 00:15:47.049 "uuid": "49967bca-2ec5-4ae2-9a8d-bd2e42bf30f8", 00:15:47.049 "is_configured": true, 00:15:47.049 "data_offset": 256, 00:15:47.049 "data_size": 7936 00:15:47.049 } 00:15:47.049 ] 00:15:47.049 }' 00:15:47.049 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.050 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.310 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:47.310 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:47.310 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:47.310 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:47.310 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:47.310 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:47.310 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:47.310 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:47.310 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.310 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:15:47.310 [2024-10-15 01:16:59.883424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.310 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.310 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:47.310 "name": "Existed_Raid", 00:15:47.310 "aliases": [ 00:15:47.310 "aafc7f65-733d-4967-af8a-4e52e2f9d2b9" 00:15:47.310 ], 00:15:47.310 "product_name": "Raid Volume", 00:15:47.310 "block_size": 4096, 00:15:47.310 "num_blocks": 7936, 00:15:47.310 "uuid": "aafc7f65-733d-4967-af8a-4e52e2f9d2b9", 00:15:47.310 "md_size": 32, 00:15:47.310 "md_interleave": false, 00:15:47.310 "dif_type": 0, 00:15:47.310 "assigned_rate_limits": { 00:15:47.310 "rw_ios_per_sec": 0, 00:15:47.310 "rw_mbytes_per_sec": 0, 00:15:47.310 "r_mbytes_per_sec": 0, 00:15:47.310 "w_mbytes_per_sec": 0 00:15:47.310 }, 00:15:47.310 "claimed": false, 00:15:47.310 "zoned": false, 00:15:47.310 "supported_io_types": { 00:15:47.310 "read": true, 00:15:47.310 "write": true, 00:15:47.310 "unmap": false, 00:15:47.310 "flush": false, 00:15:47.310 "reset": true, 00:15:47.310 "nvme_admin": false, 00:15:47.310 "nvme_io": false, 00:15:47.310 "nvme_io_md": false, 00:15:47.310 "write_zeroes": true, 00:15:47.310 "zcopy": false, 00:15:47.310 "get_zone_info": false, 00:15:47.310 "zone_management": false, 00:15:47.310 "zone_append": false, 00:15:47.310 "compare": false, 00:15:47.310 "compare_and_write": false, 00:15:47.310 "abort": false, 00:15:47.310 "seek_hole": false, 00:15:47.310 "seek_data": false, 00:15:47.310 "copy": false, 00:15:47.310 "nvme_iov_md": false 00:15:47.310 }, 00:15:47.310 "memory_domains": [ 00:15:47.310 { 00:15:47.310 "dma_device_id": "system", 00:15:47.310 "dma_device_type": 1 00:15:47.310 }, 00:15:47.310 { 00:15:47.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.310 "dma_device_type": 2 00:15:47.310 }, 00:15:47.310 { 
00:15:47.310 "dma_device_id": "system", 00:15:47.310 "dma_device_type": 1 00:15:47.310 }, 00:15:47.310 { 00:15:47.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.310 "dma_device_type": 2 00:15:47.310 } 00:15:47.310 ], 00:15:47.310 "driver_specific": { 00:15:47.310 "raid": { 00:15:47.310 "uuid": "aafc7f65-733d-4967-af8a-4e52e2f9d2b9", 00:15:47.310 "strip_size_kb": 0, 00:15:47.310 "state": "online", 00:15:47.310 "raid_level": "raid1", 00:15:47.310 "superblock": true, 00:15:47.310 "num_base_bdevs": 2, 00:15:47.310 "num_base_bdevs_discovered": 2, 00:15:47.310 "num_base_bdevs_operational": 2, 00:15:47.310 "base_bdevs_list": [ 00:15:47.310 { 00:15:47.310 "name": "BaseBdev1", 00:15:47.310 "uuid": "1d42fa84-0008-4032-914e-324aef5952af", 00:15:47.310 "is_configured": true, 00:15:47.310 "data_offset": 256, 00:15:47.310 "data_size": 7936 00:15:47.310 }, 00:15:47.310 { 00:15:47.310 "name": "BaseBdev2", 00:15:47.310 "uuid": "49967bca-2ec5-4ae2-9a8d-bd2e42bf30f8", 00:15:47.310 "is_configured": true, 00:15:47.310 "data_offset": 256, 00:15:47.310 "data_size": 7936 00:15:47.310 } 00:15:47.310 ] 00:15:47.310 } 00:15:47.310 } 00:15:47.310 }' 00:15:47.310 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.310 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:47.310 BaseBdev2' 00:15:47.310 01:16:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.310 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:47.310 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.310 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:47.310 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.310 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.310 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.570 [2024-10-15 01:17:00.122776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:47.570 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.571 "name": "Existed_Raid", 00:15:47.571 "uuid": "aafc7f65-733d-4967-af8a-4e52e2f9d2b9", 00:15:47.571 "strip_size_kb": 0, 00:15:47.571 "state": "online", 00:15:47.571 "raid_level": "raid1", 00:15:47.571 "superblock": true, 00:15:47.571 "num_base_bdevs": 2, 00:15:47.571 "num_base_bdevs_discovered": 1, 00:15:47.571 "num_base_bdevs_operational": 1, 00:15:47.571 "base_bdevs_list": [ 00:15:47.571 { 00:15:47.571 "name": null, 00:15:47.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.571 "is_configured": false, 00:15:47.571 "data_offset": 0, 00:15:47.571 "data_size": 7936 00:15:47.571 }, 00:15:47.571 { 00:15:47.571 "name": "BaseBdev2", 00:15:47.571 "uuid": 
"49967bca-2ec5-4ae2-9a8d-bd2e42bf30f8", 00:15:47.571 "is_configured": true, 00:15:47.571 "data_offset": 256, 00:15:47.571 "data_size": 7936 00:15:47.571 } 00:15:47.571 ] 00:15:47.571 }' 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.571 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.143 [2024-10-15 01:17:00.638132] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:48.143 [2024-10-15 01:17:00.638251] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.143 [2024-10-15 01:17:00.650593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.143 [2024-10-15 01:17:00.650640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.143 [2024-10-15 01:17:00.650652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:48.143 01:17:00 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97260 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97260 ']' 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97260 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97260 00:15:48.143 killing process with pid 97260 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97260' 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97260 00:15:48.143 [2024-10-15 01:17:00.737681] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.143 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97260 00:15:48.143 [2024-10-15 01:17:00.738703] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.403 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:15:48.403 00:15:48.403 real 0m3.941s 00:15:48.403 user 0m6.284s 00:15:48.403 sys 0m0.781s 00:15:48.403 ************************************ 00:15:48.403 END TEST raid_state_function_test_sb_md_separate 00:15:48.403 
************************************ 00:15:48.403 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:48.403 01:17:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.403 01:17:01 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:15:48.403 01:17:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:48.403 01:17:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:48.403 01:17:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:48.403 ************************************ 00:15:48.403 START TEST raid_superblock_test_md_separate 00:15:48.403 ************************************ 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97497 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97497 00:15:48.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97497 ']' 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:48.403 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.403 [2024-10-15 01:17:01.106390] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:15:48.403 [2024-10-15 01:17:01.106530] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97497 ] 00:15:48.663 [2024-10-15 01:17:01.235399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.663 [2024-10-15 01:17:01.263931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.663 [2024-10-15 01:17:01.306543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.663 [2024-10-15 01:17:01.306666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.233 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.233 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:49.492 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:49.492 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.492 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:49.492 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:49.492 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:49.492 01:17:01 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.492 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.493 malloc1 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.493 [2024-10-15 01:17:01.985978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:49.493 [2024-10-15 01:17:01.986047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.493 [2024-10-15 01:17:01.986070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:49.493 [2024-10-15 01:17:01.986080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.493 [2024-10-15 01:17:01.988060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.493 [2024-10-15 01:17:01.988099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:15:49.493 pt1 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.493 01:17:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.493 malloc2 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.493 01:17:02 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.493 [2024-10-15 01:17:02.019301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:49.493 [2024-10-15 01:17:02.019401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.493 [2024-10-15 01:17:02.019436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:49.493 [2024-10-15 01:17:02.019464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.493 [2024-10-15 01:17:02.021442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.493 [2024-10-15 01:17:02.021513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:49.493 pt2 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.493 [2024-10-15 01:17:02.031333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:49.493 [2024-10-15 01:17:02.033276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.493 [2024-10-15 01:17:02.033506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:49.493 [2024-10-15 01:17:02.033556] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:49.493 [2024-10-15 01:17:02.033673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:49.493 [2024-10-15 01:17:02.033825] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:49.493 [2024-10-15 01:17:02.033868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:49.493 [2024-10-15 01:17:02.034001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.493 01:17:02 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.493 "name": "raid_bdev1", 00:15:49.493 "uuid": "01dbfdfb-0128-4ad9-bc21-04faee6064b3", 00:15:49.493 "strip_size_kb": 0, 00:15:49.493 "state": "online", 00:15:49.493 "raid_level": "raid1", 00:15:49.493 "superblock": true, 00:15:49.493 "num_base_bdevs": 2, 00:15:49.493 "num_base_bdevs_discovered": 2, 00:15:49.493 "num_base_bdevs_operational": 2, 00:15:49.493 "base_bdevs_list": [ 00:15:49.493 { 00:15:49.493 "name": "pt1", 00:15:49.493 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.493 "is_configured": true, 00:15:49.493 "data_offset": 256, 00:15:49.493 "data_size": 7936 00:15:49.493 }, 00:15:49.493 { 00:15:49.493 "name": "pt2", 00:15:49.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.493 "is_configured": true, 00:15:49.493 "data_offset": 256, 00:15:49.493 "data_size": 7936 00:15:49.493 } 00:15:49.493 ] 00:15:49.493 }' 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.493 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.062 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:50.062 01:17:02 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:50.062 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:50.062 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:50.062 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:50.062 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:50.062 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.062 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:50.062 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.062 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.062 [2024-10-15 01:17:02.490862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.062 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.062 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:50.062 "name": "raid_bdev1", 00:15:50.062 "aliases": [ 00:15:50.062 "01dbfdfb-0128-4ad9-bc21-04faee6064b3" 00:15:50.062 ], 00:15:50.062 "product_name": "Raid Volume", 00:15:50.062 "block_size": 4096, 00:15:50.062 "num_blocks": 7936, 00:15:50.062 "uuid": "01dbfdfb-0128-4ad9-bc21-04faee6064b3", 00:15:50.062 "md_size": 32, 00:15:50.062 "md_interleave": false, 00:15:50.062 "dif_type": 0, 00:15:50.062 "assigned_rate_limits": { 00:15:50.062 "rw_ios_per_sec": 0, 00:15:50.062 "rw_mbytes_per_sec": 0, 00:15:50.062 "r_mbytes_per_sec": 0, 00:15:50.062 "w_mbytes_per_sec": 0 00:15:50.062 }, 00:15:50.062 "claimed": false, 00:15:50.062 "zoned": false, 
00:15:50.062 "supported_io_types": { 00:15:50.062 "read": true, 00:15:50.062 "write": true, 00:15:50.062 "unmap": false, 00:15:50.062 "flush": false, 00:15:50.062 "reset": true, 00:15:50.062 "nvme_admin": false, 00:15:50.062 "nvme_io": false, 00:15:50.063 "nvme_io_md": false, 00:15:50.063 "write_zeroes": true, 00:15:50.063 "zcopy": false, 00:15:50.063 "get_zone_info": false, 00:15:50.063 "zone_management": false, 00:15:50.063 "zone_append": false, 00:15:50.063 "compare": false, 00:15:50.063 "compare_and_write": false, 00:15:50.063 "abort": false, 00:15:50.063 "seek_hole": false, 00:15:50.063 "seek_data": false, 00:15:50.063 "copy": false, 00:15:50.063 "nvme_iov_md": false 00:15:50.063 }, 00:15:50.063 "memory_domains": [ 00:15:50.063 { 00:15:50.063 "dma_device_id": "system", 00:15:50.063 "dma_device_type": 1 00:15:50.063 }, 00:15:50.063 { 00:15:50.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.063 "dma_device_type": 2 00:15:50.063 }, 00:15:50.063 { 00:15:50.063 "dma_device_id": "system", 00:15:50.063 "dma_device_type": 1 00:15:50.063 }, 00:15:50.063 { 00:15:50.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.063 "dma_device_type": 2 00:15:50.063 } 00:15:50.063 ], 00:15:50.063 "driver_specific": { 00:15:50.063 "raid": { 00:15:50.063 "uuid": "01dbfdfb-0128-4ad9-bc21-04faee6064b3", 00:15:50.063 "strip_size_kb": 0, 00:15:50.063 "state": "online", 00:15:50.063 "raid_level": "raid1", 00:15:50.063 "superblock": true, 00:15:50.063 "num_base_bdevs": 2, 00:15:50.063 "num_base_bdevs_discovered": 2, 00:15:50.063 "num_base_bdevs_operational": 2, 00:15:50.063 "base_bdevs_list": [ 00:15:50.063 { 00:15:50.063 "name": "pt1", 00:15:50.063 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.063 "is_configured": true, 00:15:50.063 "data_offset": 256, 00:15:50.063 "data_size": 7936 00:15:50.063 }, 00:15:50.063 { 00:15:50.063 "name": "pt2", 00:15:50.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.063 "is_configured": true, 00:15:50.063 "data_offset": 256, 
00:15:50.063 "data_size": 7936 00:15:50.063 } 00:15:50.063 ] 00:15:50.063 } 00:15:50.063 } 00:15:50.063 }' 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:50.063 pt2' 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.063 [2024-10-15 01:17:02.750333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=01dbfdfb-0128-4ad9-bc21-04faee6064b3 00:15:50.063 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 01dbfdfb-0128-4ad9-bc21-04faee6064b3 ']' 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.323 [2024-10-15 01:17:02.793992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.323 [2024-10-15 01:17:02.794021] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.323 [2024-10-15 01:17:02.794111] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.323 [2024-10-15 01:17:02.794182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.323 [2024-10-15 01:17:02.794215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.323 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:15:50.324 01:17:02 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.324 [2024-10-15 01:17:02.929782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:50.324 [2024-10-15 01:17:02.931749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:50.324 [2024-10-15 01:17:02.931862] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:50.324 [2024-10-15 01:17:02.931971] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:50.324 [2024-10-15 01:17:02.932040] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.324 [2024-10-15 01:17:02.932090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:50.324 request: 00:15:50.324 { 00:15:50.324 "name": 
"raid_bdev1", 00:15:50.324 "raid_level": "raid1", 00:15:50.324 "base_bdevs": [ 00:15:50.324 "malloc1", 00:15:50.324 "malloc2" 00:15:50.324 ], 00:15:50.324 "superblock": false, 00:15:50.324 "method": "bdev_raid_create", 00:15:50.324 "req_id": 1 00:15:50.324 } 00:15:50.324 Got JSON-RPC error response 00:15:50.324 response: 00:15:50.324 { 00:15:50.324 "code": -17, 00:15:50.324 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:50.324 } 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.324 [2024-10-15 01:17:02.981664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:50.324 [2024-10-15 01:17:02.981795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.324 [2024-10-15 01:17:02.981839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:50.324 [2024-10-15 01:17:02.981871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.324 [2024-10-15 01:17:02.983898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.324 [2024-10-15 01:17:02.983934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:50.324 [2024-10-15 01:17:02.984009] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:50.324 [2024-10-15 01:17:02.984061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:50.324 pt1 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.324 01:17:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.324 01:17:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.324 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.324 "name": "raid_bdev1", 00:15:50.324 "uuid": "01dbfdfb-0128-4ad9-bc21-04faee6064b3", 00:15:50.324 "strip_size_kb": 0, 00:15:50.324 "state": "configuring", 00:15:50.324 "raid_level": "raid1", 00:15:50.324 "superblock": true, 00:15:50.324 "num_base_bdevs": 2, 00:15:50.324 "num_base_bdevs_discovered": 1, 00:15:50.324 "num_base_bdevs_operational": 2, 00:15:50.324 "base_bdevs_list": [ 00:15:50.324 { 00:15:50.324 "name": "pt1", 00:15:50.324 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.324 "is_configured": true, 00:15:50.324 "data_offset": 256, 00:15:50.324 "data_size": 7936 00:15:50.324 }, 00:15:50.324 { 00:15:50.324 "name": null, 00:15:50.324 
"uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.324 "is_configured": false, 00:15:50.324 "data_offset": 256, 00:15:50.324 "data_size": 7936 00:15:50.324 } 00:15:50.324 ] 00:15:50.324 }' 00:15:50.324 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.324 01:17:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.894 [2024-10-15 01:17:03.428896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.894 [2024-10-15 01:17:03.429016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.894 [2024-10-15 01:17:03.429057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:50.894 [2024-10-15 01:17:03.429085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.894 [2024-10-15 01:17:03.429330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.894 [2024-10-15 01:17:03.429380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.894 [2024-10-15 01:17:03.429460] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:15:50.894 [2024-10-15 01:17:03.429506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.894 [2024-10-15 01:17:03.429626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:50.894 [2024-10-15 01:17:03.429665] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:50.894 [2024-10-15 01:17:03.429772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:50.894 [2024-10-15 01:17:03.429883] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:50.894 [2024-10-15 01:17:03.429924] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:50.894 [2024-10-15 01:17:03.430030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.894 pt2 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.894 "name": "raid_bdev1", 00:15:50.894 "uuid": "01dbfdfb-0128-4ad9-bc21-04faee6064b3", 00:15:50.894 "strip_size_kb": 0, 00:15:50.894 "state": "online", 00:15:50.894 "raid_level": "raid1", 00:15:50.894 "superblock": true, 00:15:50.894 "num_base_bdevs": 2, 00:15:50.894 "num_base_bdevs_discovered": 2, 00:15:50.894 "num_base_bdevs_operational": 2, 00:15:50.894 "base_bdevs_list": [ 00:15:50.894 { 00:15:50.894 "name": "pt1", 00:15:50.894 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.894 "is_configured": true, 00:15:50.894 "data_offset": 256, 00:15:50.894 "data_size": 7936 00:15:50.894 }, 00:15:50.894 { 00:15:50.894 "name": "pt2", 00:15:50.894 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.894 "is_configured": true, 00:15:50.894 "data_offset": 256, 
00:15:50.894 "data_size": 7936 00:15:50.894 } 00:15:50.894 ] 00:15:50.894 }' 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.894 01:17:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.464 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:51.464 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:51.464 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:51.464 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:51.464 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:51.464 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:51.464 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:51.464 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.464 01:17:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.464 01:17:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.464 [2024-10-15 01:17:03.900441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.464 01:17:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.464 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:51.464 "name": "raid_bdev1", 00:15:51.464 "aliases": [ 00:15:51.464 "01dbfdfb-0128-4ad9-bc21-04faee6064b3" 00:15:51.464 ], 00:15:51.464 "product_name": 
"Raid Volume", 00:15:51.464 "block_size": 4096, 00:15:51.464 "num_blocks": 7936, 00:15:51.464 "uuid": "01dbfdfb-0128-4ad9-bc21-04faee6064b3", 00:15:51.464 "md_size": 32, 00:15:51.464 "md_interleave": false, 00:15:51.464 "dif_type": 0, 00:15:51.464 "assigned_rate_limits": { 00:15:51.464 "rw_ios_per_sec": 0, 00:15:51.464 "rw_mbytes_per_sec": 0, 00:15:51.464 "r_mbytes_per_sec": 0, 00:15:51.464 "w_mbytes_per_sec": 0 00:15:51.464 }, 00:15:51.464 "claimed": false, 00:15:51.464 "zoned": false, 00:15:51.464 "supported_io_types": { 00:15:51.464 "read": true, 00:15:51.464 "write": true, 00:15:51.464 "unmap": false, 00:15:51.464 "flush": false, 00:15:51.464 "reset": true, 00:15:51.464 "nvme_admin": false, 00:15:51.465 "nvme_io": false, 00:15:51.465 "nvme_io_md": false, 00:15:51.465 "write_zeroes": true, 00:15:51.465 "zcopy": false, 00:15:51.465 "get_zone_info": false, 00:15:51.465 "zone_management": false, 00:15:51.465 "zone_append": false, 00:15:51.465 "compare": false, 00:15:51.465 "compare_and_write": false, 00:15:51.465 "abort": false, 00:15:51.465 "seek_hole": false, 00:15:51.465 "seek_data": false, 00:15:51.465 "copy": false, 00:15:51.465 "nvme_iov_md": false 00:15:51.465 }, 00:15:51.465 "memory_domains": [ 00:15:51.465 { 00:15:51.465 "dma_device_id": "system", 00:15:51.465 "dma_device_type": 1 00:15:51.465 }, 00:15:51.465 { 00:15:51.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.465 "dma_device_type": 2 00:15:51.465 }, 00:15:51.465 { 00:15:51.465 "dma_device_id": "system", 00:15:51.465 "dma_device_type": 1 00:15:51.465 }, 00:15:51.465 { 00:15:51.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.465 "dma_device_type": 2 00:15:51.465 } 00:15:51.465 ], 00:15:51.465 "driver_specific": { 00:15:51.465 "raid": { 00:15:51.465 "uuid": "01dbfdfb-0128-4ad9-bc21-04faee6064b3", 00:15:51.465 "strip_size_kb": 0, 00:15:51.465 "state": "online", 00:15:51.465 "raid_level": "raid1", 00:15:51.465 "superblock": true, 00:15:51.465 "num_base_bdevs": 2, 00:15:51.465 
"num_base_bdevs_discovered": 2, 00:15:51.465 "num_base_bdevs_operational": 2, 00:15:51.465 "base_bdevs_list": [ 00:15:51.465 { 00:15:51.465 "name": "pt1", 00:15:51.465 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:51.465 "is_configured": true, 00:15:51.465 "data_offset": 256, 00:15:51.465 "data_size": 7936 00:15:51.465 }, 00:15:51.465 { 00:15:51.465 "name": "pt2", 00:15:51.465 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.465 "is_configured": true, 00:15:51.465 "data_offset": 256, 00:15:51.465 "data_size": 7936 00:15:51.465 } 00:15:51.465 ] 00:15:51.465 } 00:15:51.465 } 00:15:51.465 }' 00:15:51.465 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.465 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:51.465 pt2' 00:15:51.465 01:17:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.465 
01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:51.465 [2024-10-15 01:17:04.120019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 01dbfdfb-0128-4ad9-bc21-04faee6064b3 '!=' 01dbfdfb-0128-4ad9-bc21-04faee6064b3 ']' 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.465 [2024-10-15 01:17:04.167723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.465 01:17:04 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.465 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.725 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.725 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.725 "name": "raid_bdev1", 00:15:51.725 "uuid": "01dbfdfb-0128-4ad9-bc21-04faee6064b3", 00:15:51.725 "strip_size_kb": 0, 00:15:51.725 "state": "online", 00:15:51.725 "raid_level": "raid1", 00:15:51.725 "superblock": true, 00:15:51.725 "num_base_bdevs": 2, 00:15:51.725 "num_base_bdevs_discovered": 1, 00:15:51.725 "num_base_bdevs_operational": 1, 00:15:51.725 "base_bdevs_list": [ 00:15:51.725 { 00:15:51.725 "name": null, 00:15:51.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.725 "is_configured": false, 00:15:51.725 "data_offset": 0, 00:15:51.725 "data_size": 7936 00:15:51.725 }, 00:15:51.725 { 00:15:51.725 "name": "pt2", 00:15:51.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.725 "is_configured": true, 00:15:51.725 "data_offset": 256, 00:15:51.725 "data_size": 7936 00:15:51.725 } 00:15:51.725 ] 00:15:51.725 }' 00:15:51.725 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:51.725 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.985 [2024-10-15 01:17:04.578952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.985 [2024-10-15 01:17:04.579036] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.985 [2024-10-15 01:17:04.579136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.985 [2024-10-15 01:17:04.579210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.985 [2024-10-15 01:17:04.579253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:51.985 01:17:04 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.985 [2024-10-15 01:17:04.638817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:51.985 [2024-10-15 01:17:04.638922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.985 
[2024-10-15 01:17:04.638961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:51.985 [2024-10-15 01:17:04.638988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.985 [2024-10-15 01:17:04.640991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.985 [2024-10-15 01:17:04.641077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:51.985 [2024-10-15 01:17:04.641155] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:51.985 [2024-10-15 01:17:04.641220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.985 [2024-10-15 01:17:04.641321] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:51.985 [2024-10-15 01:17:04.641362] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:51.985 [2024-10-15 01:17:04.641461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:51.985 [2024-10-15 01:17:04.641573] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:51.985 [2024-10-15 01:17:04.641611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:15:51.985 [2024-10-15 01:17:04.641719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.985 pt2 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.985 "name": "raid_bdev1", 00:15:51.985 "uuid": "01dbfdfb-0128-4ad9-bc21-04faee6064b3", 00:15:51.985 "strip_size_kb": 0, 00:15:51.985 "state": "online", 00:15:51.985 "raid_level": "raid1", 00:15:51.985 "superblock": true, 00:15:51.985 "num_base_bdevs": 2, 00:15:51.985 "num_base_bdevs_discovered": 1, 00:15:51.985 "num_base_bdevs_operational": 1, 00:15:51.985 "base_bdevs_list": [ 00:15:51.985 { 00:15:51.985 
"name": null, 00:15:51.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.985 "is_configured": false, 00:15:51.985 "data_offset": 256, 00:15:51.985 "data_size": 7936 00:15:51.985 }, 00:15:51.985 { 00:15:51.985 "name": "pt2", 00:15:51.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.985 "is_configured": true, 00:15:51.985 "data_offset": 256, 00:15:51.985 "data_size": 7936 00:15:51.985 } 00:15:51.985 ] 00:15:51.985 }' 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.985 01:17:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.556 [2024-10-15 01:17:05.074106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:52.556 [2024-10-15 01:17:05.074213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.556 [2024-10-15 01:17:05.074327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.556 [2024-10-15 01:17:05.074396] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.556 [2024-10-15 01:17:05.074466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.556 01:17:05 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.556 [2024-10-15 01:17:05.138025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:52.556 [2024-10-15 01:17:05.138096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.556 [2024-10-15 01:17:05.138119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:52.556 [2024-10-15 01:17:05.138133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.556 [2024-10-15 01:17:05.140140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.556 [2024-10-15 01:17:05.140215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:52.556 [2024-10-15 01:17:05.140276] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:15:52.556 [2024-10-15 01:17:05.140314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:52.556 [2024-10-15 01:17:05.140435] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:52.556 [2024-10-15 01:17:05.140488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:52.556 [2024-10-15 01:17:05.140513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:52.556 [2024-10-15 01:17:05.140548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:52.556 [2024-10-15 01:17:05.140610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:52.556 [2024-10-15 01:17:05.140621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:52.556 [2024-10-15 01:17:05.140684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:52.556 [2024-10-15 01:17:05.140760] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:52.556 [2024-10-15 01:17:05.140767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:52.556 [2024-10-15 01:17:05.140840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.556 pt1 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.556 "name": "raid_bdev1", 00:15:52.556 "uuid": "01dbfdfb-0128-4ad9-bc21-04faee6064b3", 00:15:52.556 "strip_size_kb": 0, 00:15:52.556 "state": "online", 00:15:52.556 "raid_level": "raid1", 00:15:52.556 "superblock": true, 00:15:52.556 "num_base_bdevs": 2, 00:15:52.556 "num_base_bdevs_discovered": 1, 00:15:52.556 
"num_base_bdevs_operational": 1, 00:15:52.556 "base_bdevs_list": [ 00:15:52.556 { 00:15:52.556 "name": null, 00:15:52.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.556 "is_configured": false, 00:15:52.556 "data_offset": 256, 00:15:52.556 "data_size": 7936 00:15:52.556 }, 00:15:52.556 { 00:15:52.556 "name": "pt2", 00:15:52.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:52.556 "is_configured": true, 00:15:52.556 "data_offset": 256, 00:15:52.556 "data_size": 7936 00:15:52.556 } 00:15:52.556 ] 00:15:52.556 }' 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.556 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:53.127 [2024-10-15 
01:17:05.645433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 01dbfdfb-0128-4ad9-bc21-04faee6064b3 '!=' 01dbfdfb-0128-4ad9-bc21-04faee6064b3 ']' 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97497 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97497 ']' 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 97497 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97497 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97497' 00:15:53.127 killing process with pid 97497 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 97497 00:15:53.127 [2024-10-15 01:17:05.734481] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.127 [2024-10-15 01:17:05.734646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.127 [2024-10-15 01:17:05.734725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:15:53.127 [2024-10-15 01:17:05.734770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:53.127 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 97497 00:15:53.127 [2024-10-15 01:17:05.759348] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.389 01:17:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:15:53.389 00:15:53.389 real 0m4.951s 00:15:53.389 user 0m8.089s 00:15:53.389 sys 0m1.060s 00:15:53.389 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:53.389 01:17:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.389 ************************************ 00:15:53.389 END TEST raid_superblock_test_md_separate 00:15:53.389 ************************************ 00:15:53.389 01:17:06 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:15:53.389 01:17:06 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:15:53.389 01:17:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:53.389 01:17:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:53.389 01:17:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.389 ************************************ 00:15:53.389 START TEST raid_rebuild_test_sb_md_separate 00:15:53.389 ************************************ 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:53.389 
01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=97814 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 97814 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97814 ']' 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:53.389 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.654 [2024-10-15 01:17:06.135837] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:15:53.654 [2024-10-15 01:17:06.136044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97814 ] 00:15:53.654 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:53.654 Zero copy mechanism will not be used. 00:15:53.654 [2024-10-15 01:17:06.279156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.654 [2024-10-15 01:17:06.308269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.654 [2024-10-15 01:17:06.351401] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.654 [2024-10-15 01:17:06.351520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.595 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:54.595 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:54.595 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.595 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:15:54.595 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.595 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.595 BaseBdev1_malloc 
00:15:54.595 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.595 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:54.595 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.595 01:17:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.595 [2024-10-15 01:17:06.999047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:54.595 [2024-10-15 01:17:06.999203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.595 [2024-10-15 01:17:06.999254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:54.595 [2024-10-15 01:17:06.999284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.595 [2024-10-15 01:17:07.001395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.595 [2024-10-15 01:17:07.001481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:54.595 BaseBdev1 00:15:54.595 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.596 BaseBdev2_malloc 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.596 [2024-10-15 01:17:07.028446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:54.596 [2024-10-15 01:17:07.028582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.596 [2024-10-15 01:17:07.028614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:54.596 [2024-10-15 01:17:07.028623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.596 [2024-10-15 01:17:07.030585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.596 [2024-10-15 01:17:07.030619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:54.596 BaseBdev2 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.596 spare_malloc 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.596 spare_delay 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.596 [2024-10-15 01:17:07.081882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:54.596 [2024-10-15 01:17:07.081990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.596 [2024-10-15 01:17:07.082069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:54.596 [2024-10-15 01:17:07.082080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.596 [2024-10-15 01:17:07.084301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.596 [2024-10-15 01:17:07.084334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:54.596 spare 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:54.596 [2024-10-15 01:17:07.093917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.596 [2024-10-15 01:17:07.095839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:54.596 [2024-10-15 01:17:07.096024] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:54.596 [2024-10-15 01:17:07.096044] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:54.596 [2024-10-15 01:17:07.096155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:54.596 [2024-10-15 01:17:07.096307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:54.596 [2024-10-15 01:17:07.096320] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:54.596 [2024-10-15 01:17:07.096420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.596 01:17:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.596 "name": "raid_bdev1", 00:15:54.596 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:15:54.596 "strip_size_kb": 0, 00:15:54.596 "state": "online", 00:15:54.596 "raid_level": "raid1", 00:15:54.596 "superblock": true, 00:15:54.596 "num_base_bdevs": 2, 00:15:54.596 "num_base_bdevs_discovered": 2, 00:15:54.596 "num_base_bdevs_operational": 2, 00:15:54.596 "base_bdevs_list": [ 00:15:54.596 { 00:15:54.596 "name": "BaseBdev1", 00:15:54.596 "uuid": "3044f7bb-c839-5735-a29c-986b03cfdc66", 00:15:54.596 "is_configured": true, 00:15:54.596 "data_offset": 256, 00:15:54.596 "data_size": 7936 00:15:54.596 }, 00:15:54.596 { 00:15:54.596 "name": "BaseBdev2", 00:15:54.596 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:15:54.596 "is_configured": true, 00:15:54.596 "data_offset": 256, 00:15:54.596 "data_size": 7936 
00:15:54.596 } 00:15:54.596 ] 00:15:54.596 }' 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.596 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.856 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:54.856 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:54.856 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.856 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.856 [2024-10-15 01:17:07.565418] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:55.115 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:55.116 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:55.116 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:55.116 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:55.116 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:55.116 [2024-10-15 01:17:07.828736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:55.374 /dev/nbd0 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.374 1+0 records in 00:15:55.374 1+0 records out 00:15:55.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375169 s, 10.9 MB/s 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:55.374 01:17:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:55.374 01:17:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:55.940 7936+0 records in 00:15:55.940 7936+0 records out 00:15:55.940 32505856 bytes (33 MB, 31 MiB) copied, 0.563832 s, 57.7 MB/s 00:15:55.940 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:55.940 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.940 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:55.940 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:55.940 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:55.940 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.940 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:56.201 [2024-10-15 01:17:08.682293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.201 01:17:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.201 [2024-10-15 01:17:08.698402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.201 "name": "raid_bdev1", 00:15:56.201 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:15:56.201 "strip_size_kb": 0, 00:15:56.201 "state": "online", 00:15:56.201 "raid_level": "raid1", 00:15:56.201 "superblock": true, 00:15:56.201 "num_base_bdevs": 2, 00:15:56.201 "num_base_bdevs_discovered": 1, 00:15:56.201 "num_base_bdevs_operational": 1, 00:15:56.201 "base_bdevs_list": [ 00:15:56.201 { 00:15:56.201 "name": null, 00:15:56.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.201 "is_configured": false, 00:15:56.201 "data_offset": 0, 00:15:56.201 "data_size": 7936 00:15:56.201 }, 00:15:56.201 { 00:15:56.201 "name": "BaseBdev2", 00:15:56.201 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:15:56.201 "is_configured": true, 00:15:56.201 "data_offset": 256, 00:15:56.201 "data_size": 7936 00:15:56.201 } 00:15:56.201 ] 00:15:56.201 }' 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.201 01:17:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:56.461 01:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:56.461 01:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.461 01:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.721 [2024-10-15 01:17:09.189552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:56.721 [2024-10-15 01:17:09.192325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:15:56.721 [2024-10-15 01:17:09.194266] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:56.721 01:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.721 01:17:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.663 "name": "raid_bdev1", 00:15:57.663 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:15:57.663 "strip_size_kb": 0, 00:15:57.663 "state": "online", 00:15:57.663 "raid_level": "raid1", 00:15:57.663 "superblock": true, 00:15:57.663 "num_base_bdevs": 2, 00:15:57.663 "num_base_bdevs_discovered": 2, 00:15:57.663 "num_base_bdevs_operational": 2, 00:15:57.663 "process": { 00:15:57.663 "type": "rebuild", 00:15:57.663 "target": "spare", 00:15:57.663 "progress": { 00:15:57.663 "blocks": 2560, 00:15:57.663 "percent": 32 00:15:57.663 } 00:15:57.663 }, 00:15:57.663 "base_bdevs_list": [ 00:15:57.663 { 00:15:57.663 "name": "spare", 00:15:57.663 "uuid": "b59ae326-30d6-5c13-ac76-98d0e5936745", 00:15:57.663 "is_configured": true, 00:15:57.663 "data_offset": 256, 00:15:57.663 "data_size": 7936 00:15:57.663 }, 00:15:57.663 { 00:15:57.663 "name": "BaseBdev2", 00:15:57.663 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:15:57.663 "is_configured": true, 00:15:57.663 "data_offset": 256, 00:15:57.663 "data_size": 7936 00:15:57.663 } 00:15:57.663 ] 00:15:57.663 }' 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.663 01:17:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.663 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.663 [2024-10-15 01:17:10.357152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.923 [2024-10-15 01:17:10.400130] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:57.923 [2024-10-15 01:17:10.400233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.923 [2024-10-15 01:17:10.400255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.923 [2024-10-15 01:17:10.400292] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.923 01:17:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.923 "name": "raid_bdev1", 00:15:57.923 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:15:57.923 "strip_size_kb": 0, 00:15:57.923 "state": "online", 00:15:57.923 "raid_level": "raid1", 00:15:57.923 "superblock": true, 00:15:57.923 "num_base_bdevs": 2, 00:15:57.923 "num_base_bdevs_discovered": 1, 00:15:57.923 "num_base_bdevs_operational": 1, 00:15:57.923 "base_bdevs_list": [ 00:15:57.923 { 00:15:57.923 "name": null, 00:15:57.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.923 "is_configured": false, 00:15:57.923 "data_offset": 0, 00:15:57.923 "data_size": 7936 00:15:57.923 }, 00:15:57.923 { 00:15:57.923 "name": "BaseBdev2", 00:15:57.923 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:15:57.923 "is_configured": true, 00:15:57.923 "data_offset": 256, 00:15:57.923 "data_size": 7936 00:15:57.923 } 00:15:57.923 ] 00:15:57.923 }' 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.923 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.183 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:58.183 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.183 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:58.183 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:58.183 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.183 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.183 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.183 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.183 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.183 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.443 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.443 "name": "raid_bdev1", 00:15:58.443 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:15:58.443 "strip_size_kb": 0, 00:15:58.443 "state": "online", 00:15:58.443 "raid_level": "raid1", 00:15:58.443 "superblock": true, 00:15:58.443 "num_base_bdevs": 2, 00:15:58.443 "num_base_bdevs_discovered": 1, 00:15:58.443 "num_base_bdevs_operational": 1, 00:15:58.443 "base_bdevs_list": [ 00:15:58.443 { 00:15:58.443 "name": null, 00:15:58.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.443 
"is_configured": false, 00:15:58.443 "data_offset": 0, 00:15:58.443 "data_size": 7936 00:15:58.443 }, 00:15:58.443 { 00:15:58.443 "name": "BaseBdev2", 00:15:58.443 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:15:58.443 "is_configured": true, 00:15:58.443 "data_offset": 256, 00:15:58.443 "data_size": 7936 00:15:58.443 } 00:15:58.443 ] 00:15:58.443 }' 00:15:58.443 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.443 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:58.443 01:17:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.443 01:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:58.443 01:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:58.443 01:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.443 01:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.443 [2024-10-15 01:17:11.014905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:58.443 [2024-10-15 01:17:11.017555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:15:58.443 [2024-10-15 01:17:11.019429] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.443 01:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.443 01:17:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:59.384 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.384 01:17:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.384 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.384 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.384 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.384 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.384 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.384 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.384 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.384 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.384 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.384 "name": "raid_bdev1", 00:15:59.384 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:15:59.384 "strip_size_kb": 0, 00:15:59.384 "state": "online", 00:15:59.384 "raid_level": "raid1", 00:15:59.384 "superblock": true, 00:15:59.384 "num_base_bdevs": 2, 00:15:59.384 "num_base_bdevs_discovered": 2, 00:15:59.384 "num_base_bdevs_operational": 2, 00:15:59.384 "process": { 00:15:59.384 "type": "rebuild", 00:15:59.384 "target": "spare", 00:15:59.384 "progress": { 00:15:59.384 "blocks": 2560, 00:15:59.384 "percent": 32 00:15:59.384 } 00:15:59.384 }, 00:15:59.384 "base_bdevs_list": [ 00:15:59.384 { 00:15:59.384 "name": "spare", 00:15:59.384 "uuid": "b59ae326-30d6-5c13-ac76-98d0e5936745", 00:15:59.384 "is_configured": true, 00:15:59.384 "data_offset": 256, 00:15:59.384 "data_size": 7936 00:15:59.384 }, 
00:15:59.384 { 00:15:59.384 "name": "BaseBdev2", 00:15:59.384 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:15:59.384 "is_configured": true, 00:15:59.384 "data_offset": 256, 00:15:59.384 "data_size": 7936 00:15:59.384 } 00:15:59.384 ] 00:15:59.384 }' 00:15:59.384 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:59.644 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=584 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.644 01:17:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.644 "name": "raid_bdev1", 00:15:59.644 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:15:59.644 "strip_size_kb": 0, 00:15:59.644 "state": "online", 00:15:59.644 "raid_level": "raid1", 00:15:59.644 "superblock": true, 00:15:59.644 "num_base_bdevs": 2, 00:15:59.644 "num_base_bdevs_discovered": 2, 00:15:59.644 "num_base_bdevs_operational": 2, 00:15:59.644 "process": { 00:15:59.644 "type": "rebuild", 00:15:59.644 "target": "spare", 00:15:59.644 "progress": { 00:15:59.644 "blocks": 2816, 00:15:59.644 "percent": 35 00:15:59.644 } 00:15:59.644 }, 00:15:59.644 "base_bdevs_list": [ 00:15:59.644 { 00:15:59.644 "name": "spare", 00:15:59.644 "uuid": "b59ae326-30d6-5c13-ac76-98d0e5936745", 00:15:59.644 "is_configured": true, 00:15:59.644 "data_offset": 256, 00:15:59.644 "data_size": 7936 00:15:59.644 }, 00:15:59.644 { 00:15:59.644 "name": "BaseBdev2", 00:15:59.644 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:15:59.644 
"is_configured": true, 00:15:59.644 "data_offset": 256, 00:15:59.644 "data_size": 7936 00:15:59.644 } 00:15:59.644 ] 00:15:59.644 }' 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.644 01:17:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.027 01:17:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.027 "name": "raid_bdev1", 00:16:01.027 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:01.027 "strip_size_kb": 0, 00:16:01.027 "state": "online", 00:16:01.027 "raid_level": "raid1", 00:16:01.027 "superblock": true, 00:16:01.027 "num_base_bdevs": 2, 00:16:01.027 "num_base_bdevs_discovered": 2, 00:16:01.027 "num_base_bdevs_operational": 2, 00:16:01.027 "process": { 00:16:01.027 "type": "rebuild", 00:16:01.027 "target": "spare", 00:16:01.027 "progress": { 00:16:01.027 "blocks": 5888, 00:16:01.027 "percent": 74 00:16:01.027 } 00:16:01.027 }, 00:16:01.027 "base_bdevs_list": [ 00:16:01.027 { 00:16:01.027 "name": "spare", 00:16:01.027 "uuid": "b59ae326-30d6-5c13-ac76-98d0e5936745", 00:16:01.027 "is_configured": true, 00:16:01.027 "data_offset": 256, 00:16:01.027 "data_size": 7936 00:16:01.027 }, 00:16:01.027 { 00:16:01.027 "name": "BaseBdev2", 00:16:01.027 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:16:01.027 "is_configured": true, 00:16:01.027 "data_offset": 256, 00:16:01.027 "data_size": 7936 00:16:01.027 } 00:16:01.027 ] 00:16:01.027 }' 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.027 01:17:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.597 [2024-10-15 01:17:14.132894] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:16:01.597 [2024-10-15 01:17:14.133087] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:01.597 [2024-10-15 01:17:14.133286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.857 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.857 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.857 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.857 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.857 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.857 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.857 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.857 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.857 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.857 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.857 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.857 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.857 "name": "raid_bdev1", 00:16:01.857 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:01.857 "strip_size_kb": 0, 00:16:01.857 "state": "online", 00:16:01.857 "raid_level": "raid1", 00:16:01.857 "superblock": true, 00:16:01.857 
"num_base_bdevs": 2, 00:16:01.857 "num_base_bdevs_discovered": 2, 00:16:01.857 "num_base_bdevs_operational": 2, 00:16:01.857 "base_bdevs_list": [ 00:16:01.857 { 00:16:01.857 "name": "spare", 00:16:01.857 "uuid": "b59ae326-30d6-5c13-ac76-98d0e5936745", 00:16:01.857 "is_configured": true, 00:16:01.857 "data_offset": 256, 00:16:01.857 "data_size": 7936 00:16:01.857 }, 00:16:01.857 { 00:16:01.857 "name": "BaseBdev2", 00:16:01.857 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:16:01.857 "is_configured": true, 00:16:01.857 "data_offset": 256, 00:16:01.857 "data_size": 7936 00:16:01.857 } 00:16:01.857 ] 00:16:01.857 }' 00:16:01.857 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.117 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:02.117 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.117 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:02.117 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:02.117 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:02.117 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.117 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:02.117 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:02.117 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.117 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.117 
01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.117 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.117 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.117 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.117 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.117 "name": "raid_bdev1", 00:16:02.117 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:02.117 "strip_size_kb": 0, 00:16:02.118 "state": "online", 00:16:02.118 "raid_level": "raid1", 00:16:02.118 "superblock": true, 00:16:02.118 "num_base_bdevs": 2, 00:16:02.118 "num_base_bdevs_discovered": 2, 00:16:02.118 "num_base_bdevs_operational": 2, 00:16:02.118 "base_bdevs_list": [ 00:16:02.118 { 00:16:02.118 "name": "spare", 00:16:02.118 "uuid": "b59ae326-30d6-5c13-ac76-98d0e5936745", 00:16:02.118 "is_configured": true, 00:16:02.118 "data_offset": 256, 00:16:02.118 "data_size": 7936 00:16:02.118 }, 00:16:02.118 { 00:16:02.118 "name": "BaseBdev2", 00:16:02.118 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:16:02.118 "is_configured": true, 00:16:02.118 "data_offset": 256, 00:16:02.118 "data_size": 7936 00:16:02.118 } 00:16:02.118 ] 00:16:02.118 }' 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.118 "name": "raid_bdev1", 00:16:02.118 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:02.118 
"strip_size_kb": 0, 00:16:02.118 "state": "online", 00:16:02.118 "raid_level": "raid1", 00:16:02.118 "superblock": true, 00:16:02.118 "num_base_bdevs": 2, 00:16:02.118 "num_base_bdevs_discovered": 2, 00:16:02.118 "num_base_bdevs_operational": 2, 00:16:02.118 "base_bdevs_list": [ 00:16:02.118 { 00:16:02.118 "name": "spare", 00:16:02.118 "uuid": "b59ae326-30d6-5c13-ac76-98d0e5936745", 00:16:02.118 "is_configured": true, 00:16:02.118 "data_offset": 256, 00:16:02.118 "data_size": 7936 00:16:02.118 }, 00:16:02.118 { 00:16:02.118 "name": "BaseBdev2", 00:16:02.118 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:16:02.118 "is_configured": true, 00:16:02.118 "data_offset": 256, 00:16:02.118 "data_size": 7936 00:16:02.118 } 00:16:02.118 ] 00:16:02.118 }' 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.118 01:17:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.689 [2024-10-15 01:17:15.203330] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.689 [2024-10-15 01:17:15.203360] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:02.689 [2024-10-15 01:17:15.203451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.689 [2024-10-15 01:17:15.203523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.689 [2024-10-15 01:17:15.203537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, 
state offline 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.689 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:02.949 /dev/nbd0 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.949 1+0 records in 00:16:02.949 1+0 records out 00:16:02.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277345 s, 14.8 MB/s 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:02.949 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:03.210 /dev/nbd1 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.210 1+0 records in 00:16:03.210 1+0 records out 00:16:03.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250959 s, 16.3 MB/s 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.210 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:03.470 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:03.470 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:03.470 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:03.470 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.470 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.470 01:17:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:03.470 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:03.470 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.470 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.470 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.729 [2024-10-15 01:17:16.249800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:03.729 [2024-10-15 01:17:16.249866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.729 [2024-10-15 01:17:16.249887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:03.729 [2024-10-15 01:17:16.249899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:03.729 [2024-10-15 01:17:16.251878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.729 [2024-10-15 01:17:16.251981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:03.729 [2024-10-15 01:17:16.252067] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:03.729 [2024-10-15 01:17:16.252110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.729 [2024-10-15 01:17:16.252256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.729 spare 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.729 [2024-10-15 01:17:16.352172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:16:03.729 [2024-10-15 01:17:16.352239] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:03.729 [2024-10-15 01:17:16.352416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:16:03.729 [2024-10-15 01:17:16.352570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:16:03.729 [2024-10-15 01:17:16.352587] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:16:03.729 [2024-10-15 01:17:16.352710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.729 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.729 "name": "raid_bdev1", 00:16:03.729 "uuid": 
"f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:03.729 "strip_size_kb": 0, 00:16:03.729 "state": "online", 00:16:03.729 "raid_level": "raid1", 00:16:03.729 "superblock": true, 00:16:03.729 "num_base_bdevs": 2, 00:16:03.729 "num_base_bdevs_discovered": 2, 00:16:03.729 "num_base_bdevs_operational": 2, 00:16:03.729 "base_bdevs_list": [ 00:16:03.729 { 00:16:03.729 "name": "spare", 00:16:03.729 "uuid": "b59ae326-30d6-5c13-ac76-98d0e5936745", 00:16:03.729 "is_configured": true, 00:16:03.729 "data_offset": 256, 00:16:03.729 "data_size": 7936 00:16:03.729 }, 00:16:03.729 { 00:16:03.729 "name": "BaseBdev2", 00:16:03.729 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:16:03.729 "is_configured": true, 00:16:03.729 "data_offset": 256, 00:16:03.729 "data_size": 7936 00:16:03.729 } 00:16:03.729 ] 00:16:03.730 }' 00:16:03.730 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.730 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.298 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.298 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.298 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.298 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.298 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.298 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.299 "name": "raid_bdev1", 00:16:04.299 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:04.299 "strip_size_kb": 0, 00:16:04.299 "state": "online", 00:16:04.299 "raid_level": "raid1", 00:16:04.299 "superblock": true, 00:16:04.299 "num_base_bdevs": 2, 00:16:04.299 "num_base_bdevs_discovered": 2, 00:16:04.299 "num_base_bdevs_operational": 2, 00:16:04.299 "base_bdevs_list": [ 00:16:04.299 { 00:16:04.299 "name": "spare", 00:16:04.299 "uuid": "b59ae326-30d6-5c13-ac76-98d0e5936745", 00:16:04.299 "is_configured": true, 00:16:04.299 "data_offset": 256, 00:16:04.299 "data_size": 7936 00:16:04.299 }, 00:16:04.299 { 00:16:04.299 "name": "BaseBdev2", 00:16:04.299 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:16:04.299 "is_configured": true, 00:16:04.299 "data_offset": 256, 00:16:04.299 "data_size": 7936 00:16:04.299 } 00:16:04.299 ] 00:16:04.299 }' 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.299 01:17:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.299 [2024-10-15 01:17:17.004579] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.299 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.299 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.299 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.299 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.299 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.299 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.299 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.299 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.299 01:17:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.299 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.299 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.299 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.299 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.299 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.299 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.558 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.558 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.558 "name": "raid_bdev1", 00:16:04.558 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:04.558 "strip_size_kb": 0, 00:16:04.558 "state": "online", 00:16:04.558 "raid_level": "raid1", 00:16:04.558 "superblock": true, 00:16:04.558 "num_base_bdevs": 2, 00:16:04.558 "num_base_bdevs_discovered": 1, 00:16:04.558 "num_base_bdevs_operational": 1, 00:16:04.558 "base_bdevs_list": [ 00:16:04.558 { 00:16:04.559 "name": null, 00:16:04.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.559 "is_configured": false, 00:16:04.559 "data_offset": 0, 00:16:04.559 "data_size": 7936 00:16:04.559 }, 00:16:04.559 { 00:16:04.559 "name": "BaseBdev2", 00:16:04.559 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:16:04.559 "is_configured": true, 00:16:04.559 "data_offset": 256, 00:16:04.559 "data_size": 7936 00:16:04.559 } 00:16:04.559 ] 00:16:04.559 }' 00:16:04.559 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.559 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.819 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:04.819 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.819 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.819 [2024-10-15 01:17:17.467985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:04.819 [2024-10-15 01:17:17.468294] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:04.819 [2024-10-15 01:17:17.468361] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:04.819 [2024-10-15 01:17:17.468408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:04.819 [2024-10-15 01:17:17.470876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:16:04.819 [2024-10-15 01:17:17.472801] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:04.819 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.819 01:17:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:05.760 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.760 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.760 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.760 01:17:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.760 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.019 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.019 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.019 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.019 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.019 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.019 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.019 "name": "raid_bdev1", 00:16:06.019 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:06.019 "strip_size_kb": 0, 00:16:06.019 "state": "online", 00:16:06.019 "raid_level": "raid1", 00:16:06.019 "superblock": true, 00:16:06.019 "num_base_bdevs": 2, 00:16:06.019 "num_base_bdevs_discovered": 2, 00:16:06.019 "num_base_bdevs_operational": 2, 00:16:06.019 "process": { 00:16:06.019 "type": "rebuild", 00:16:06.019 "target": "spare", 00:16:06.019 "progress": { 00:16:06.019 "blocks": 2560, 00:16:06.019 "percent": 32 00:16:06.019 } 00:16:06.019 }, 00:16:06.019 "base_bdevs_list": [ 00:16:06.019 { 00:16:06.019 "name": "spare", 00:16:06.019 "uuid": "b59ae326-30d6-5c13-ac76-98d0e5936745", 00:16:06.019 "is_configured": true, 00:16:06.019 "data_offset": 256, 00:16:06.019 "data_size": 7936 00:16:06.019 }, 00:16:06.019 { 00:16:06.019 "name": "BaseBdev2", 00:16:06.019 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:16:06.019 "is_configured": true, 00:16:06.019 "data_offset": 256, 00:16:06.019 "data_size": 7936 00:16:06.019 } 00:16:06.019 ] 00:16:06.019 
}' 00:16:06.019 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.019 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.019 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.019 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.019 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:06.019 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.019 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.019 [2024-10-15 01:17:18.632417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.020 [2024-10-15 01:17:18.677950] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:06.020 [2024-10-15 01:17:18.678016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.020 [2024-10-15 01:17:18.678034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.020 [2024-10-15 01:17:18.678043] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.020 "name": "raid_bdev1", 00:16:06.020 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:06.020 "strip_size_kb": 0, 00:16:06.020 "state": "online", 00:16:06.020 "raid_level": "raid1", 00:16:06.020 "superblock": true, 00:16:06.020 "num_base_bdevs": 2, 00:16:06.020 "num_base_bdevs_discovered": 1, 00:16:06.020 "num_base_bdevs_operational": 1, 00:16:06.020 "base_bdevs_list": [ 00:16:06.020 { 00:16:06.020 "name": 
null, 00:16:06.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.020 "is_configured": false, 00:16:06.020 "data_offset": 0, 00:16:06.020 "data_size": 7936 00:16:06.020 }, 00:16:06.020 { 00:16:06.020 "name": "BaseBdev2", 00:16:06.020 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:16:06.020 "is_configured": true, 00:16:06.020 "data_offset": 256, 00:16:06.020 "data_size": 7936 00:16:06.020 } 00:16:06.020 ] 00:16:06.020 }' 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.020 01:17:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.589 01:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:06.589 01:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.589 01:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.589 [2024-10-15 01:17:19.136637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:06.589 [2024-10-15 01:17:19.136775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.589 [2024-10-15 01:17:19.136824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:06.589 [2024-10-15 01:17:19.136863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.589 [2024-10-15 01:17:19.137110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.589 [2024-10-15 01:17:19.137163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:06.589 [2024-10-15 01:17:19.137269] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:06.589 [2024-10-15 01:17:19.137309] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:06.589 [2024-10-15 01:17:19.137354] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:06.589 [2024-10-15 01:17:19.137407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.589 [2024-10-15 01:17:19.139885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:16:06.589 [2024-10-15 01:17:19.141831] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.589 spare 00:16:06.589 01:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.589 01:17:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:07.529 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.529 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.529 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.529 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.529 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.530 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.530 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.530 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.530 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.530 01:17:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.530 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.530 "name": "raid_bdev1", 00:16:07.530 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:07.530 "strip_size_kb": 0, 00:16:07.530 "state": "online", 00:16:07.530 "raid_level": "raid1", 00:16:07.530 "superblock": true, 00:16:07.530 "num_base_bdevs": 2, 00:16:07.530 "num_base_bdevs_discovered": 2, 00:16:07.530 "num_base_bdevs_operational": 2, 00:16:07.530 "process": { 00:16:07.530 "type": "rebuild", 00:16:07.530 "target": "spare", 00:16:07.530 "progress": { 00:16:07.530 "blocks": 2560, 00:16:07.530 "percent": 32 00:16:07.530 } 00:16:07.530 }, 00:16:07.530 "base_bdevs_list": [ 00:16:07.530 { 00:16:07.530 "name": "spare", 00:16:07.530 "uuid": "b59ae326-30d6-5c13-ac76-98d0e5936745", 00:16:07.530 "is_configured": true, 00:16:07.530 "data_offset": 256, 00:16:07.530 "data_size": 7936 00:16:07.530 }, 00:16:07.530 { 00:16:07.530 "name": "BaseBdev2", 00:16:07.530 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:16:07.530 "is_configured": true, 00:16:07.530 "data_offset": 256, 00:16:07.530 "data_size": 7936 00:16:07.530 } 00:16:07.530 ] 00:16:07.530 }' 00:16:07.530 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.790 [2024-10-15 01:17:20.312931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.790 [2024-10-15 01:17:20.346994] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:07.790 [2024-10-15 01:17:20.347100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.790 [2024-10-15 01:17:20.347117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.790 [2024-10-15 01:17:20.347126] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.790 "name": "raid_bdev1", 00:16:07.790 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:07.790 "strip_size_kb": 0, 00:16:07.790 "state": "online", 00:16:07.790 "raid_level": "raid1", 00:16:07.790 "superblock": true, 00:16:07.790 "num_base_bdevs": 2, 00:16:07.790 "num_base_bdevs_discovered": 1, 00:16:07.790 "num_base_bdevs_operational": 1, 00:16:07.790 "base_bdevs_list": [ 00:16:07.790 { 00:16:07.790 "name": null, 00:16:07.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.790 "is_configured": false, 00:16:07.790 "data_offset": 0, 00:16:07.790 "data_size": 7936 00:16:07.790 }, 00:16:07.790 { 00:16:07.790 "name": "BaseBdev2", 00:16:07.790 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:16:07.790 "is_configured": true, 00:16:07.790 "data_offset": 256, 00:16:07.790 "data_size": 7936 00:16:07.790 } 00:16:07.790 ] 00:16:07.790 }' 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.790 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.359 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.359 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.359 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.359 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.359 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.359 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.359 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.359 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.359 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.359 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.359 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.359 "name": "raid_bdev1", 00:16:08.359 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:08.359 "strip_size_kb": 0, 00:16:08.359 "state": "online", 00:16:08.359 "raid_level": "raid1", 00:16:08.359 "superblock": true, 00:16:08.359 "num_base_bdevs": 2, 00:16:08.359 "num_base_bdevs_discovered": 1, 00:16:08.359 "num_base_bdevs_operational": 1, 00:16:08.359 "base_bdevs_list": [ 00:16:08.359 { 00:16:08.359 "name": null, 00:16:08.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.359 "is_configured": false, 00:16:08.359 "data_offset": 0, 00:16:08.359 "data_size": 7936 00:16:08.359 }, 00:16:08.359 { 00:16:08.359 "name": "BaseBdev2", 00:16:08.359 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 
00:16:08.359 "is_configured": true, 00:16:08.359 "data_offset": 256, 00:16:08.359 "data_size": 7936 00:16:08.359 } 00:16:08.359 ] 00:16:08.359 }' 00:16:08.360 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.360 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.360 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.360 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.360 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:08.360 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.360 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.360 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.360 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:08.360 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.360 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.360 [2024-10-15 01:17:20.945506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:08.360 [2024-10-15 01:17:20.945601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.360 [2024-10-15 01:17:20.945623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:08.360 [2024-10-15 01:17:20.945633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:08.360 [2024-10-15 01:17:20.945835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.360 [2024-10-15 01:17:20.945852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:08.360 [2024-10-15 01:17:20.945909] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:08.360 [2024-10-15 01:17:20.945935] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:08.360 [2024-10-15 01:17:20.945945] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:08.360 [2024-10-15 01:17:20.945970] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:08.360 BaseBdev1 00:16:08.360 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.360 01:17:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:09.332 01:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:09.332 01:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.332 01:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.332 01:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.332 01:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.332 01:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:09.332 01:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.332 01:17:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.332 01:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.332 01:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.332 01:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.332 01:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.332 01:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.332 01:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.332 01:17:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.332 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.332 "name": "raid_bdev1", 00:16:09.332 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:09.332 "strip_size_kb": 0, 00:16:09.332 "state": "online", 00:16:09.332 "raid_level": "raid1", 00:16:09.332 "superblock": true, 00:16:09.332 "num_base_bdevs": 2, 00:16:09.332 "num_base_bdevs_discovered": 1, 00:16:09.332 "num_base_bdevs_operational": 1, 00:16:09.332 "base_bdevs_list": [ 00:16:09.332 { 00:16:09.332 "name": null, 00:16:09.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.332 "is_configured": false, 00:16:09.332 "data_offset": 0, 00:16:09.332 "data_size": 7936 00:16:09.332 }, 00:16:09.332 { 00:16:09.332 "name": "BaseBdev2", 00:16:09.332 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:16:09.332 "is_configured": true, 00:16:09.332 "data_offset": 256, 00:16:09.332 "data_size": 7936 00:16:09.332 } 00:16:09.332 ] 00:16:09.332 }' 00:16:09.333 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.333 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.903 "name": "raid_bdev1", 00:16:09.903 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:09.903 "strip_size_kb": 0, 00:16:09.903 "state": "online", 00:16:09.903 "raid_level": "raid1", 00:16:09.903 "superblock": true, 00:16:09.903 "num_base_bdevs": 2, 00:16:09.903 "num_base_bdevs_discovered": 1, 00:16:09.903 "num_base_bdevs_operational": 1, 00:16:09.903 "base_bdevs_list": [ 00:16:09.903 { 00:16:09.903 "name": null, 00:16:09.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.903 
"is_configured": false, 00:16:09.903 "data_offset": 0, 00:16:09.903 "data_size": 7936 00:16:09.903 }, 00:16:09.903 { 00:16:09.903 "name": "BaseBdev2", 00:16:09.903 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:16:09.903 "is_configured": true, 00:16:09.903 "data_offset": 256, 00:16:09.903 "data_size": 7936 00:16:09.903 } 00:16:09.903 ] 00:16:09.903 }' 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:09.903 01:17:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.903 [2024-10-15 01:17:22.603357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.903 [2024-10-15 01:17:22.603578] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:09.903 [2024-10-15 01:17:22.603635] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:09.903 request: 00:16:09.903 { 00:16:09.903 "base_bdev": "BaseBdev1", 00:16:09.903 "raid_bdev": "raid_bdev1", 00:16:09.903 "method": "bdev_raid_add_base_bdev", 00:16:09.903 "req_id": 1 00:16:09.903 } 00:16:09.903 Got JSON-RPC error response 00:16:09.903 response: 00:16:09.903 { 00:16:09.903 "code": -22, 00:16:09.903 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:09.903 } 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:09.903 01:17:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.286 "name": "raid_bdev1", 00:16:11.286 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:11.286 "strip_size_kb": 0, 00:16:11.286 "state": "online", 00:16:11.286 "raid_level": "raid1", 00:16:11.286 "superblock": true, 00:16:11.286 "num_base_bdevs": 2, 00:16:11.286 
"num_base_bdevs_discovered": 1, 00:16:11.286 "num_base_bdevs_operational": 1, 00:16:11.286 "base_bdevs_list": [ 00:16:11.286 { 00:16:11.286 "name": null, 00:16:11.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.286 "is_configured": false, 00:16:11.286 "data_offset": 0, 00:16:11.286 "data_size": 7936 00:16:11.286 }, 00:16:11.286 { 00:16:11.286 "name": "BaseBdev2", 00:16:11.286 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:16:11.286 "is_configured": true, 00:16:11.286 "data_offset": 256, 00:16:11.286 "data_size": 7936 00:16:11.286 } 00:16:11.286 ] 00:16:11.286 }' 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.286 01:17:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.547 "name": "raid_bdev1", 00:16:11.547 "uuid": "f4ee8bf1-5a94-484a-8953-79db24261bbe", 00:16:11.547 "strip_size_kb": 0, 00:16:11.547 "state": "online", 00:16:11.547 "raid_level": "raid1", 00:16:11.547 "superblock": true, 00:16:11.547 "num_base_bdevs": 2, 00:16:11.547 "num_base_bdevs_discovered": 1, 00:16:11.547 "num_base_bdevs_operational": 1, 00:16:11.547 "base_bdevs_list": [ 00:16:11.547 { 00:16:11.547 "name": null, 00:16:11.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.547 "is_configured": false, 00:16:11.547 "data_offset": 0, 00:16:11.547 "data_size": 7936 00:16:11.547 }, 00:16:11.547 { 00:16:11.547 "name": "BaseBdev2", 00:16:11.547 "uuid": "9f14440b-2540-5e6b-8f36-d7efba7d9aba", 00:16:11.547 "is_configured": true, 00:16:11.547 "data_offset": 256, 00:16:11.547 "data_size": 7936 00:16:11.547 } 00:16:11.547 ] 00:16:11.547 }' 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 97814 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97814 ']' 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97814 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:11.547 01:17:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97814 00:16:11.547 killing process with pid 97814 00:16:11.547 Received shutdown signal, test time was about 60.000000 seconds 00:16:11.547 00:16:11.547 Latency(us) 00:16:11.547 [2024-10-15T01:17:24.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.547 [2024-10-15T01:17:24.271Z] =================================================================================================================== 00:16:11.547 [2024-10-15T01:17:24.271Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97814' 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97814 00:16:11.547 [2024-10-15 01:17:24.250864] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.547 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97814 00:16:11.547 [2024-10-15 01:17:24.251017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.547 [2024-10-15 01:17:24.251070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.547 [2024-10-15 01:17:24.251079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:16:11.808 [2024-10-15 01:17:24.285155] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:16:11.808 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:16:11.808 00:16:11.808 real 0m18.441s 00:16:11.808 user 0m24.745s 00:16:11.808 sys 0m2.526s 00:16:11.808 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:11.808 01:17:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.808 ************************************ 00:16:11.808 END TEST raid_rebuild_test_sb_md_separate 00:16:11.808 ************************************ 00:16:12.068 01:17:24 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:12.068 01:17:24 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:12.068 01:17:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:12.068 01:17:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:12.068 01:17:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:12.068 ************************************ 00:16:12.068 START TEST raid_state_function_test_sb_md_interleaved 00:16:12.068 ************************************ 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:12.068 01:17:24 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98488 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98488' 00:16:12.068 Process raid pid: 98488 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98488 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98488 ']' 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:12.068 01:17:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.068 [2024-10-15 01:17:24.648258] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:16:12.068 [2024-10-15 01:17:24.648470] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.328 [2024-10-15 01:17:24.793902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.328 [2024-10-15 01:17:24.824201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.328 [2024-10-15 01:17:24.867532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:12.328 [2024-10-15 01:17:24.867631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.897 [2024-10-15 01:17:25.485872] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:12.897 [2024-10-15 01:17:25.485930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:12.897 [2024-10-15 01:17:25.485942] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:12.897 [2024-10-15 01:17:25.485952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:12.897 01:17:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.897 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.897 01:17:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.898 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.898 "name": "Existed_Raid", 00:16:12.898 "uuid": "7baf64f2-af6a-4dd0-9a67-3e71813386b3", 00:16:12.898 "strip_size_kb": 0, 00:16:12.898 "state": "configuring", 00:16:12.898 "raid_level": "raid1", 00:16:12.898 "superblock": true, 00:16:12.898 "num_base_bdevs": 2, 00:16:12.898 "num_base_bdevs_discovered": 0, 00:16:12.898 "num_base_bdevs_operational": 2, 00:16:12.898 "base_bdevs_list": [ 00:16:12.898 { 00:16:12.898 "name": "BaseBdev1", 00:16:12.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.898 "is_configured": false, 00:16:12.898 "data_offset": 0, 00:16:12.898 "data_size": 0 00:16:12.898 }, 00:16:12.898 { 00:16:12.898 "name": "BaseBdev2", 00:16:12.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.898 "is_configured": false, 00:16:12.898 "data_offset": 0, 00:16:12.898 "data_size": 0 00:16:12.898 } 00:16:12.898 ] 00:16:12.898 }' 00:16:12.898 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.898 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.467 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:13.467 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.467 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.467 [2024-10-15 01:17:25.953016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:13.467 [2024-10-15 01:17:25.953134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state 
configuring 00:16:13.467 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.467 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:13.467 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.468 [2024-10-15 01:17:25.964997] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:13.468 [2024-10-15 01:17:25.965089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:13.468 [2024-10-15 01:17:25.965119] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:13.468 [2024-10-15 01:17:25.965155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.468 [2024-10-15 01:17:25.985975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.468 BaseBdev1 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.468 01:17:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.468 [ 00:16:13.468 { 00:16:13.468 "name": "BaseBdev1", 00:16:13.468 "aliases": [ 00:16:13.468 "a1b0d01d-385b-43f6-97d9-96eb9ea8ac4d" 00:16:13.468 ], 00:16:13.468 "product_name": "Malloc disk", 00:16:13.468 "block_size": 4128, 00:16:13.468 "num_blocks": 8192, 00:16:13.468 "uuid": "a1b0d01d-385b-43f6-97d9-96eb9ea8ac4d", 00:16:13.468 "md_size": 32, 00:16:13.468 
"md_interleave": true, 00:16:13.468 "dif_type": 0, 00:16:13.468 "assigned_rate_limits": { 00:16:13.468 "rw_ios_per_sec": 0, 00:16:13.468 "rw_mbytes_per_sec": 0, 00:16:13.468 "r_mbytes_per_sec": 0, 00:16:13.468 "w_mbytes_per_sec": 0 00:16:13.468 }, 00:16:13.468 "claimed": true, 00:16:13.468 "claim_type": "exclusive_write", 00:16:13.468 "zoned": false, 00:16:13.468 "supported_io_types": { 00:16:13.468 "read": true, 00:16:13.468 "write": true, 00:16:13.468 "unmap": true, 00:16:13.468 "flush": true, 00:16:13.468 "reset": true, 00:16:13.468 "nvme_admin": false, 00:16:13.468 "nvme_io": false, 00:16:13.468 "nvme_io_md": false, 00:16:13.468 "write_zeroes": true, 00:16:13.468 "zcopy": true, 00:16:13.468 "get_zone_info": false, 00:16:13.468 "zone_management": false, 00:16:13.468 "zone_append": false, 00:16:13.468 "compare": false, 00:16:13.468 "compare_and_write": false, 00:16:13.468 "abort": true, 00:16:13.468 "seek_hole": false, 00:16:13.468 "seek_data": false, 00:16:13.468 "copy": true, 00:16:13.468 "nvme_iov_md": false 00:16:13.468 }, 00:16:13.468 "memory_domains": [ 00:16:13.468 { 00:16:13.468 "dma_device_id": "system", 00:16:13.468 "dma_device_type": 1 00:16:13.468 }, 00:16:13.468 { 00:16:13.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.468 "dma_device_type": 2 00:16:13.468 } 00:16:13.468 ], 00:16:13.468 "driver_specific": {} 00:16:13.468 } 00:16:13.468 ] 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.468 01:17:26 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.468 "name": "Existed_Raid", 00:16:13.468 "uuid": "9f5e10a3-0805-4081-8d3e-0b0eff74a010", 00:16:13.468 "strip_size_kb": 0, 00:16:13.468 "state": "configuring", 00:16:13.468 "raid_level": "raid1", 
00:16:13.468 "superblock": true, 00:16:13.468 "num_base_bdevs": 2, 00:16:13.468 "num_base_bdevs_discovered": 1, 00:16:13.468 "num_base_bdevs_operational": 2, 00:16:13.468 "base_bdevs_list": [ 00:16:13.468 { 00:16:13.468 "name": "BaseBdev1", 00:16:13.468 "uuid": "a1b0d01d-385b-43f6-97d9-96eb9ea8ac4d", 00:16:13.468 "is_configured": true, 00:16:13.468 "data_offset": 256, 00:16:13.468 "data_size": 7936 00:16:13.468 }, 00:16:13.468 { 00:16:13.468 "name": "BaseBdev2", 00:16:13.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.468 "is_configured": false, 00:16:13.468 "data_offset": 0, 00:16:13.468 "data_size": 0 00:16:13.468 } 00:16:13.468 ] 00:16:13.468 }' 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.468 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.039 [2024-10-15 01:17:26.465251] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:14.039 [2024-10-15 01:17:26.465311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.039 [2024-10-15 01:17:26.477287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.039 [2024-10-15 01:17:26.479161] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:14.039 [2024-10-15 01:17:26.479219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.039 
01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.039 "name": "Existed_Raid", 00:16:14.039 "uuid": "2f996c72-8f84-4ec7-8e10-ebe0424a6fe3", 00:16:14.039 "strip_size_kb": 0, 00:16:14.039 "state": "configuring", 00:16:14.039 "raid_level": "raid1", 00:16:14.039 "superblock": true, 00:16:14.039 "num_base_bdevs": 2, 00:16:14.039 "num_base_bdevs_discovered": 1, 00:16:14.039 "num_base_bdevs_operational": 2, 00:16:14.039 "base_bdevs_list": [ 00:16:14.039 { 00:16:14.039 "name": "BaseBdev1", 00:16:14.039 "uuid": "a1b0d01d-385b-43f6-97d9-96eb9ea8ac4d", 00:16:14.039 "is_configured": true, 00:16:14.039 "data_offset": 256, 00:16:14.039 "data_size": 7936 00:16:14.039 }, 00:16:14.039 { 00:16:14.039 "name": "BaseBdev2", 00:16:14.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.039 "is_configured": false, 00:16:14.039 "data_offset": 0, 00:16:14.039 "data_size": 0 00:16:14.039 } 00:16:14.039 ] 00:16:14.039 }' 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:14.039 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.299 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:14.299 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.299 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.299 [2024-10-15 01:17:26.987525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.299 [2024-10-15 01:17:26.987820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:14.299 [2024-10-15 01:17:26.987871] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:14.299 [2024-10-15 01:17:26.987991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:14.299 [2024-10-15 01:17:26.988097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:14.299 [2024-10-15 01:17:26.988145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:16:14.299 BaseBdev2 00:16:14.299 [2024-10-15 01:17:26.988272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.299 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.299 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:14.299 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:14.299 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:16:14.299 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:14.300 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:14.300 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:14.300 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:14.300 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.300 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.300 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.300 01:17:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:14.300 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.300 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.300 [ 00:16:14.300 { 00:16:14.300 "name": "BaseBdev2", 00:16:14.300 "aliases": [ 00:16:14.300 "8af837e3-a42e-4a1a-a271-11b1662dd4a4" 00:16:14.300 ], 00:16:14.300 "product_name": "Malloc disk", 00:16:14.300 "block_size": 4128, 00:16:14.300 "num_blocks": 8192, 00:16:14.300 "uuid": "8af837e3-a42e-4a1a-a271-11b1662dd4a4", 00:16:14.300 "md_size": 32, 00:16:14.300 "md_interleave": true, 00:16:14.300 "dif_type": 0, 00:16:14.300 "assigned_rate_limits": { 00:16:14.300 "rw_ios_per_sec": 0, 00:16:14.300 "rw_mbytes_per_sec": 0, 00:16:14.300 "r_mbytes_per_sec": 0, 00:16:14.300 "w_mbytes_per_sec": 0 00:16:14.300 }, 00:16:14.300 "claimed": true, 00:16:14.300 "claim_type": "exclusive_write", 
00:16:14.300 "zoned": false, 00:16:14.300 "supported_io_types": { 00:16:14.300 "read": true, 00:16:14.300 "write": true, 00:16:14.300 "unmap": true, 00:16:14.300 "flush": true, 00:16:14.300 "reset": true, 00:16:14.300 "nvme_admin": false, 00:16:14.300 "nvme_io": false, 00:16:14.300 "nvme_io_md": false, 00:16:14.300 "write_zeroes": true, 00:16:14.300 "zcopy": true, 00:16:14.300 "get_zone_info": false, 00:16:14.300 "zone_management": false, 00:16:14.300 "zone_append": false, 00:16:14.300 "compare": false, 00:16:14.300 "compare_and_write": false, 00:16:14.300 "abort": true, 00:16:14.300 "seek_hole": false, 00:16:14.300 "seek_data": false, 00:16:14.300 "copy": true, 00:16:14.300 "nvme_iov_md": false 00:16:14.300 }, 00:16:14.300 "memory_domains": [ 00:16:14.300 { 00:16:14.300 "dma_device_id": "system", 00:16:14.300 "dma_device_type": 1 00:16:14.300 }, 00:16:14.300 { 00:16:14.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.560 "dma_device_type": 2 00:16:14.560 } 00:16:14.560 ], 00:16:14.560 "driver_specific": {} 00:16:14.560 } 00:16:14.560 ] 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.560 
01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.560 "name": "Existed_Raid", 00:16:14.560 "uuid": "2f996c72-8f84-4ec7-8e10-ebe0424a6fe3", 00:16:14.560 "strip_size_kb": 0, 00:16:14.560 "state": "online", 00:16:14.560 "raid_level": "raid1", 00:16:14.560 "superblock": true, 00:16:14.560 "num_base_bdevs": 2, 00:16:14.560 "num_base_bdevs_discovered": 2, 00:16:14.560 
"num_base_bdevs_operational": 2, 00:16:14.560 "base_bdevs_list": [ 00:16:14.560 { 00:16:14.560 "name": "BaseBdev1", 00:16:14.560 "uuid": "a1b0d01d-385b-43f6-97d9-96eb9ea8ac4d", 00:16:14.560 "is_configured": true, 00:16:14.560 "data_offset": 256, 00:16:14.560 "data_size": 7936 00:16:14.560 }, 00:16:14.560 { 00:16:14.560 "name": "BaseBdev2", 00:16:14.560 "uuid": "8af837e3-a42e-4a1a-a271-11b1662dd4a4", 00:16:14.560 "is_configured": true, 00:16:14.560 "data_offset": 256, 00:16:14.560 "data_size": 7936 00:16:14.560 } 00:16:14.560 ] 00:16:14.560 }' 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.560 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.821 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:14.821 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:14.821 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:14.821 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:14.821 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:14.821 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:14.821 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:14.821 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.821 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.821 01:17:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:14.821 [2024-10-15 01:17:27.483038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.821 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.821 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:14.821 "name": "Existed_Raid", 00:16:14.821 "aliases": [ 00:16:14.821 "2f996c72-8f84-4ec7-8e10-ebe0424a6fe3" 00:16:14.821 ], 00:16:14.821 "product_name": "Raid Volume", 00:16:14.821 "block_size": 4128, 00:16:14.821 "num_blocks": 7936, 00:16:14.821 "uuid": "2f996c72-8f84-4ec7-8e10-ebe0424a6fe3", 00:16:14.821 "md_size": 32, 00:16:14.821 "md_interleave": true, 00:16:14.821 "dif_type": 0, 00:16:14.821 "assigned_rate_limits": { 00:16:14.821 "rw_ios_per_sec": 0, 00:16:14.821 "rw_mbytes_per_sec": 0, 00:16:14.821 "r_mbytes_per_sec": 0, 00:16:14.821 "w_mbytes_per_sec": 0 00:16:14.821 }, 00:16:14.821 "claimed": false, 00:16:14.821 "zoned": false, 00:16:14.821 "supported_io_types": { 00:16:14.821 "read": true, 00:16:14.821 "write": true, 00:16:14.821 "unmap": false, 00:16:14.821 "flush": false, 00:16:14.821 "reset": true, 00:16:14.821 "nvme_admin": false, 00:16:14.821 "nvme_io": false, 00:16:14.821 "nvme_io_md": false, 00:16:14.821 "write_zeroes": true, 00:16:14.821 "zcopy": false, 00:16:14.821 "get_zone_info": false, 00:16:14.821 "zone_management": false, 00:16:14.821 "zone_append": false, 00:16:14.821 "compare": false, 00:16:14.821 "compare_and_write": false, 00:16:14.821 "abort": false, 00:16:14.821 "seek_hole": false, 00:16:14.821 "seek_data": false, 00:16:14.821 "copy": false, 00:16:14.821 "nvme_iov_md": false 00:16:14.821 }, 00:16:14.821 "memory_domains": [ 00:16:14.821 { 00:16:14.821 "dma_device_id": "system", 00:16:14.821 "dma_device_type": 1 00:16:14.821 }, 00:16:14.821 { 00:16:14.821 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:14.821 "dma_device_type": 2 00:16:14.821 }, 00:16:14.821 { 00:16:14.821 "dma_device_id": "system", 00:16:14.821 "dma_device_type": 1 00:16:14.821 }, 00:16:14.821 { 00:16:14.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.821 "dma_device_type": 2 00:16:14.821 } 00:16:14.821 ], 00:16:14.821 "driver_specific": { 00:16:14.821 "raid": { 00:16:14.821 "uuid": "2f996c72-8f84-4ec7-8e10-ebe0424a6fe3", 00:16:14.821 "strip_size_kb": 0, 00:16:14.821 "state": "online", 00:16:14.821 "raid_level": "raid1", 00:16:14.821 "superblock": true, 00:16:14.821 "num_base_bdevs": 2, 00:16:14.821 "num_base_bdevs_discovered": 2, 00:16:14.821 "num_base_bdevs_operational": 2, 00:16:14.821 "base_bdevs_list": [ 00:16:14.821 { 00:16:14.821 "name": "BaseBdev1", 00:16:14.821 "uuid": "a1b0d01d-385b-43f6-97d9-96eb9ea8ac4d", 00:16:14.821 "is_configured": true, 00:16:14.821 "data_offset": 256, 00:16:14.821 "data_size": 7936 00:16:14.821 }, 00:16:14.821 { 00:16:14.821 "name": "BaseBdev2", 00:16:14.821 "uuid": "8af837e3-a42e-4a1a-a271-11b1662dd4a4", 00:16:14.821 "is_configured": true, 00:16:14.821 "data_offset": 256, 00:16:14.821 "data_size": 7936 00:16:14.821 } 00:16:14.821 ] 00:16:14.821 } 00:16:14.821 } 00:16:14.821 }' 00:16:14.821 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:15.081 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:15.082 BaseBdev2' 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:15.082 
01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.082 [2024-10-15 01:17:27.718442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.082 01:17:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.082 "name": "Existed_Raid", 00:16:15.082 "uuid": "2f996c72-8f84-4ec7-8e10-ebe0424a6fe3", 00:16:15.082 "strip_size_kb": 0, 00:16:15.082 "state": "online", 00:16:15.082 "raid_level": "raid1", 00:16:15.082 "superblock": true, 00:16:15.082 "num_base_bdevs": 2, 00:16:15.082 "num_base_bdevs_discovered": 1, 00:16:15.082 "num_base_bdevs_operational": 1, 00:16:15.082 "base_bdevs_list": [ 00:16:15.082 { 00:16:15.082 "name": null, 00:16:15.082 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:15.082 "is_configured": false, 00:16:15.082 "data_offset": 0, 00:16:15.082 "data_size": 7936 00:16:15.082 }, 00:16:15.082 { 00:16:15.082 "name": "BaseBdev2", 00:16:15.082 "uuid": "8af837e3-a42e-4a1a-a271-11b1662dd4a4", 00:16:15.082 "is_configured": true, 00:16:15.082 "data_offset": 256, 00:16:15.082 "data_size": 7936 00:16:15.082 } 00:16:15.082 ] 00:16:15.082 }' 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.082 01:17:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:15.651 01:17:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.651 [2024-10-15 01:17:28.249283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:15.651 [2024-10-15 01:17:28.249451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.651 [2024-10-15 01:17:28.261458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.651 [2024-10-15 01:17:28.261584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.651 [2024-10-15 01:17:28.261629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98488 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98488 ']' 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98488 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98488 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:15.651 killing process with pid 98488 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98488' 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 98488 00:16:15.651 [2024-10-15 01:17:28.362773] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:15.651 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 98488 00:16:15.651 [2024-10-15 01:17:28.363784] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.911 
01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:15.911 ************************************ 00:16:15.911 END TEST raid_state_function_test_sb_md_interleaved 00:16:15.911 ************************************ 00:16:15.911 00:16:15.911 real 0m4.018s 00:16:15.911 user 0m6.374s 00:16:15.911 sys 0m0.812s 00:16:15.911 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:15.911 01:17:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.911 01:17:28 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:15.911 01:17:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:15.911 01:17:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:15.911 01:17:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:16.172 ************************************ 00:16:16.172 START TEST raid_superblock_test_md_interleaved 00:16:16.172 ************************************ 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=98729 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 98729 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98729 ']' 00:16:16.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:16.172 01:17:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.172 [2024-10-15 01:17:28.725592] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:16:16.172 [2024-10-15 01:17:28.725790] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98729 ] 00:16:16.172 [2024-10-15 01:17:28.864837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.172 [2024-10-15 01:17:28.893376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.432 [2024-10-15 01:17:28.936029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.432 [2024-10-15 01:17:28.936139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.002 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:17.002 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:17.002 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.003 malloc1 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.003 [2024-10-15 01:17:29.591227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:17.003 [2024-10-15 01:17:29.591354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:17.003 [2024-10-15 01:17:29.591399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:17.003 [2024-10-15 01:17:29.591445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.003 [2024-10-15 01:17:29.593411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.003 [2024-10-15 01:17:29.593483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:17.003 pt1 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.003 01:17:29 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.003 malloc2 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.003 [2024-10-15 01:17:29.624144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:17.003 [2024-10-15 01:17:29.624227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.003 [2024-10-15 01:17:29.624247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:17.003 [2024-10-15 01:17:29.624258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.003 [2024-10-15 01:17:29.626123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.003 [2024-10-15 01:17:29.626161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:17.003 pt2 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.003 [2024-10-15 01:17:29.636167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:17.003 [2024-10-15 01:17:29.638081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:17.003 [2024-10-15 01:17:29.638274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:17.003 [2024-10-15 01:17:29.638291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:17.003 [2024-10-15 01:17:29.638398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:17.003 [2024-10-15 01:17:29.638464] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:17.003 [2024-10-15 01:17:29.638473] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:17.003 [2024-10-15 01:17:29.638543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.003 01:17:29 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.003 "name": "raid_bdev1", 00:16:17.003 "uuid": "56d5e80c-19e4-46e2-88cb-b80bb647435b", 00:16:17.003 "strip_size_kb": 0, 00:16:17.003 "state": "online", 00:16:17.003 "raid_level": "raid1", 00:16:17.003 "superblock": true, 00:16:17.003 "num_base_bdevs": 2, 00:16:17.003 "num_base_bdevs_discovered": 2, 00:16:17.003 "num_base_bdevs_operational": 2, 00:16:17.003 "base_bdevs_list": [ 00:16:17.003 { 00:16:17.003 "name": "pt1", 00:16:17.003 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.003 "is_configured": true, 00:16:17.003 "data_offset": 256, 00:16:17.003 "data_size": 7936 00:16:17.003 }, 00:16:17.003 { 00:16:17.003 "name": "pt2", 00:16:17.003 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:16:17.003 "is_configured": true, 00:16:17.003 "data_offset": 256, 00:16:17.003 "data_size": 7936 00:16:17.003 } 00:16:17.003 ] 00:16:17.003 }' 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.003 01:17:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.572 [2024-10-15 01:17:30.115679] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:16:17.572 "name": "raid_bdev1", 00:16:17.572 "aliases": [ 00:16:17.572 "56d5e80c-19e4-46e2-88cb-b80bb647435b" 00:16:17.572 ], 00:16:17.572 "product_name": "Raid Volume", 00:16:17.572 "block_size": 4128, 00:16:17.572 "num_blocks": 7936, 00:16:17.572 "uuid": "56d5e80c-19e4-46e2-88cb-b80bb647435b", 00:16:17.572 "md_size": 32, 00:16:17.572 "md_interleave": true, 00:16:17.572 "dif_type": 0, 00:16:17.572 "assigned_rate_limits": { 00:16:17.572 "rw_ios_per_sec": 0, 00:16:17.572 "rw_mbytes_per_sec": 0, 00:16:17.572 "r_mbytes_per_sec": 0, 00:16:17.572 "w_mbytes_per_sec": 0 00:16:17.572 }, 00:16:17.572 "claimed": false, 00:16:17.572 "zoned": false, 00:16:17.572 "supported_io_types": { 00:16:17.572 "read": true, 00:16:17.572 "write": true, 00:16:17.572 "unmap": false, 00:16:17.572 "flush": false, 00:16:17.572 "reset": true, 00:16:17.572 "nvme_admin": false, 00:16:17.572 "nvme_io": false, 00:16:17.572 "nvme_io_md": false, 00:16:17.572 "write_zeroes": true, 00:16:17.572 "zcopy": false, 00:16:17.572 "get_zone_info": false, 00:16:17.572 "zone_management": false, 00:16:17.572 "zone_append": false, 00:16:17.572 "compare": false, 00:16:17.572 "compare_and_write": false, 00:16:17.572 "abort": false, 00:16:17.572 "seek_hole": false, 00:16:17.572 "seek_data": false, 00:16:17.572 "copy": false, 00:16:17.572 "nvme_iov_md": false 00:16:17.572 }, 00:16:17.572 "memory_domains": [ 00:16:17.572 { 00:16:17.572 "dma_device_id": "system", 00:16:17.572 "dma_device_type": 1 00:16:17.572 }, 00:16:17.572 { 00:16:17.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.572 "dma_device_type": 2 00:16:17.572 }, 00:16:17.572 { 00:16:17.572 "dma_device_id": "system", 00:16:17.572 "dma_device_type": 1 00:16:17.572 }, 00:16:17.572 { 00:16:17.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.572 "dma_device_type": 2 00:16:17.572 } 00:16:17.572 ], 00:16:17.572 "driver_specific": { 00:16:17.572 "raid": { 00:16:17.572 "uuid": "56d5e80c-19e4-46e2-88cb-b80bb647435b", 00:16:17.572 "strip_size_kb": 0, 
00:16:17.572 "state": "online", 00:16:17.572 "raid_level": "raid1", 00:16:17.572 "superblock": true, 00:16:17.572 "num_base_bdevs": 2, 00:16:17.572 "num_base_bdevs_discovered": 2, 00:16:17.572 "num_base_bdevs_operational": 2, 00:16:17.572 "base_bdevs_list": [ 00:16:17.572 { 00:16:17.572 "name": "pt1", 00:16:17.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.572 "is_configured": true, 00:16:17.572 "data_offset": 256, 00:16:17.572 "data_size": 7936 00:16:17.572 }, 00:16:17.572 { 00:16:17.572 "name": "pt2", 00:16:17.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.572 "is_configured": true, 00:16:17.572 "data_offset": 256, 00:16:17.572 "data_size": 7936 00:16:17.572 } 00:16:17.572 ] 00:16:17.572 } 00:16:17.572 } 00:16:17.572 }' 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:17.572 pt2' 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@10 -- # set +x 00:16:17.572 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:16:17.833 [2024-10-15 01:17:30.363162] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=56d5e80c-19e4-46e2-88cb-b80bb647435b 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 56d5e80c-19e4-46e2-88cb-b80bb647435b ']' 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 [2024-10-15 01:17:30.406819] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.833 [2024-10-15 01:17:30.406910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.833 [2024-10-15 01:17:30.407041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.833 [2024-10-15 01:17:30.407131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.833 [2024-10-15 01:17:30.407198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.833 01:17:30 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 [2024-10-15 01:17:30.534625] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:17.833 [2024-10-15 01:17:30.536554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:17.833 [2024-10-15 01:17:30.536624] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:17.833 [2024-10-15 01:17:30.536700] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:17.833 [2024-10-15 01:17:30.536719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.833 [2024-10-15 01:17:30.536740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:16:17.833 request: 00:16:17.833 { 00:16:17.833 "name": "raid_bdev1", 00:16:17.833 "raid_level": "raid1", 00:16:17.833 "base_bdevs": [ 00:16:17.833 "malloc1", 00:16:17.833 "malloc2" 00:16:17.833 ], 00:16:17.833 "superblock": false, 00:16:17.833 "method": "bdev_raid_create", 00:16:17.833 "req_id": 1 00:16:17.833 } 00:16:17.833 Got JSON-RPC error response 00:16:17.833 response: 00:16:17.833 { 00:16:17.833 "code": -17, 00:16:17.833 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:17.833 } 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.833 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.093 [2024-10-15 01:17:30.598464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:18.093 [2024-10-15 01:17:30.598592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.093 [2024-10-15 01:17:30.598631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:18.093 [2024-10-15 01:17:30.598664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.093 [2024-10-15 01:17:30.600677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.093 [2024-10-15 01:17:30.600749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:18.093 [2024-10-15 01:17:30.600837] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:16:18.093 [2024-10-15 01:17:30.600906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:18.093 pt1 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.093 "name": "raid_bdev1", 00:16:18.093 "uuid": "56d5e80c-19e4-46e2-88cb-b80bb647435b", 00:16:18.093 "strip_size_kb": 0, 00:16:18.093 "state": "configuring", 00:16:18.093 "raid_level": "raid1", 00:16:18.093 "superblock": true, 00:16:18.093 "num_base_bdevs": 2, 00:16:18.093 "num_base_bdevs_discovered": 1, 00:16:18.093 "num_base_bdevs_operational": 2, 00:16:18.093 "base_bdevs_list": [ 00:16:18.093 { 00:16:18.093 "name": "pt1", 00:16:18.093 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.093 "is_configured": true, 00:16:18.093 "data_offset": 256, 00:16:18.093 "data_size": 7936 00:16:18.093 }, 00:16:18.093 { 00:16:18.093 "name": null, 00:16:18.093 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.093 "is_configured": false, 00:16:18.093 "data_offset": 256, 00:16:18.093 "data_size": 7936 00:16:18.093 } 00:16:18.093 ] 00:16:18.093 }' 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.093 01:17:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.353 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:18.353 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:18.353 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:18.353 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:18.353 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:18.353 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.353 [2024-10-15 01:17:31.065701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:18.353 [2024-10-15 01:17:31.065769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.353 [2024-10-15 01:17:31.065808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:18.353 [2024-10-15 01:17:31.065817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.353 [2024-10-15 01:17:31.066001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.353 [2024-10-15 01:17:31.066015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:18.353 [2024-10-15 01:17:31.066069] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:18.353 [2024-10-15 01:17:31.066097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.353 [2024-10-15 01:17:31.066183] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:18.353 [2024-10-15 01:17:31.066192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:18.353 [2024-10-15 01:17:31.066278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:18.354 [2024-10-15 01:17:31.066335] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:18.354 [2024-10-15 01:17:31.066347] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:16:18.354 [2024-10-15 01:17:31.066402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.354 pt2 00:16:18.354 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:18.354 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:18.354 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:18.354 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:18.354 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.354 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.354 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.354 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.354 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.354 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.354 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.354 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.354 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.614 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.614 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.614 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.614 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:16:18.614 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.614 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.614 "name": "raid_bdev1", 00:16:18.614 "uuid": "56d5e80c-19e4-46e2-88cb-b80bb647435b", 00:16:18.614 "strip_size_kb": 0, 00:16:18.614 "state": "online", 00:16:18.614 "raid_level": "raid1", 00:16:18.614 "superblock": true, 00:16:18.614 "num_base_bdevs": 2, 00:16:18.614 "num_base_bdevs_discovered": 2, 00:16:18.614 "num_base_bdevs_operational": 2, 00:16:18.614 "base_bdevs_list": [ 00:16:18.614 { 00:16:18.614 "name": "pt1", 00:16:18.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.614 "is_configured": true, 00:16:18.614 "data_offset": 256, 00:16:18.614 "data_size": 7936 00:16:18.614 }, 00:16:18.614 { 00:16:18.614 "name": "pt2", 00:16:18.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.614 "is_configured": true, 00:16:18.614 "data_offset": 256, 00:16:18.614 "data_size": 7936 00:16:18.614 } 00:16:18.614 ] 00:16:18.614 }' 00:16:18.614 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.614 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.874 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:18.874 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:18.874 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:18.874 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:18.874 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:18.874 01:17:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:18.874 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:18.874 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:18.874 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.874 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.874 [2024-10-15 01:17:31.497295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.874 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.874 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:18.874 "name": "raid_bdev1", 00:16:18.874 "aliases": [ 00:16:18.874 "56d5e80c-19e4-46e2-88cb-b80bb647435b" 00:16:18.874 ], 00:16:18.874 "product_name": "Raid Volume", 00:16:18.874 "block_size": 4128, 00:16:18.874 "num_blocks": 7936, 00:16:18.874 "uuid": "56d5e80c-19e4-46e2-88cb-b80bb647435b", 00:16:18.874 "md_size": 32, 00:16:18.874 "md_interleave": true, 00:16:18.874 "dif_type": 0, 00:16:18.874 "assigned_rate_limits": { 00:16:18.874 "rw_ios_per_sec": 0, 00:16:18.874 "rw_mbytes_per_sec": 0, 00:16:18.874 "r_mbytes_per_sec": 0, 00:16:18.874 "w_mbytes_per_sec": 0 00:16:18.874 }, 00:16:18.874 "claimed": false, 00:16:18.874 "zoned": false, 00:16:18.874 "supported_io_types": { 00:16:18.874 "read": true, 00:16:18.874 "write": true, 00:16:18.874 "unmap": false, 00:16:18.874 "flush": false, 00:16:18.874 "reset": true, 00:16:18.874 "nvme_admin": false, 00:16:18.875 "nvme_io": false, 00:16:18.875 "nvme_io_md": false, 00:16:18.875 "write_zeroes": true, 00:16:18.875 "zcopy": false, 00:16:18.875 "get_zone_info": false, 00:16:18.875 "zone_management": 
false, 00:16:18.875 "zone_append": false, 00:16:18.875 "compare": false, 00:16:18.875 "compare_and_write": false, 00:16:18.875 "abort": false, 00:16:18.875 "seek_hole": false, 00:16:18.875 "seek_data": false, 00:16:18.875 "copy": false, 00:16:18.875 "nvme_iov_md": false 00:16:18.875 }, 00:16:18.875 "memory_domains": [ 00:16:18.875 { 00:16:18.875 "dma_device_id": "system", 00:16:18.875 "dma_device_type": 1 00:16:18.875 }, 00:16:18.875 { 00:16:18.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.875 "dma_device_type": 2 00:16:18.875 }, 00:16:18.875 { 00:16:18.875 "dma_device_id": "system", 00:16:18.875 "dma_device_type": 1 00:16:18.875 }, 00:16:18.875 { 00:16:18.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.875 "dma_device_type": 2 00:16:18.875 } 00:16:18.875 ], 00:16:18.875 "driver_specific": { 00:16:18.875 "raid": { 00:16:18.875 "uuid": "56d5e80c-19e4-46e2-88cb-b80bb647435b", 00:16:18.875 "strip_size_kb": 0, 00:16:18.875 "state": "online", 00:16:18.875 "raid_level": "raid1", 00:16:18.875 "superblock": true, 00:16:18.875 "num_base_bdevs": 2, 00:16:18.875 "num_base_bdevs_discovered": 2, 00:16:18.875 "num_base_bdevs_operational": 2, 00:16:18.875 "base_bdevs_list": [ 00:16:18.875 { 00:16:18.875 "name": "pt1", 00:16:18.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.875 "is_configured": true, 00:16:18.875 "data_offset": 256, 00:16:18.875 "data_size": 7936 00:16:18.875 }, 00:16:18.875 { 00:16:18.875 "name": "pt2", 00:16:18.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.875 "is_configured": true, 00:16:18.875 "data_offset": 256, 00:16:18.875 "data_size": 7936 00:16:18.875 } 00:16:18.875 ] 00:16:18.875 } 00:16:18.875 } 00:16:18.875 }' 00:16:18.875 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:18.875 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:16:18.875 pt2' 00:16:18.875 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.135 [2024-10-15 01:17:31.728863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 56d5e80c-19e4-46e2-88cb-b80bb647435b '!=' 56d5e80c-19e4-46e2-88cb-b80bb647435b ']' 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.135 01:17:31 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.135 [2024-10-15 01:17:31.756574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.135 01:17:31 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.135 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.136 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.136 "name": "raid_bdev1", 00:16:19.136 "uuid": "56d5e80c-19e4-46e2-88cb-b80bb647435b", 00:16:19.136 "strip_size_kb": 0, 00:16:19.136 "state": "online", 00:16:19.136 "raid_level": "raid1", 00:16:19.136 "superblock": true, 00:16:19.136 "num_base_bdevs": 2, 00:16:19.136 "num_base_bdevs_discovered": 1, 00:16:19.136 "num_base_bdevs_operational": 1, 00:16:19.136 "base_bdevs_list": [ 00:16:19.136 { 00:16:19.136 "name": null, 00:16:19.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.136 "is_configured": false, 00:16:19.136 "data_offset": 0, 00:16:19.136 "data_size": 7936 00:16:19.136 }, 00:16:19.136 { 00:16:19.136 "name": "pt2", 00:16:19.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.136 "is_configured": true, 00:16:19.136 "data_offset": 256, 00:16:19.136 "data_size": 7936 00:16:19.136 } 00:16:19.136 ] 00:16:19.136 }' 00:16:19.136 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.136 01:17:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.706 [2024-10-15 01:17:32.215778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.706 [2024-10-15 01:17:32.215877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:16:19.706 [2024-10-15 01:17:32.216001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.706 [2024-10-15 01:17:32.216076] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.706 [2024-10-15 01:17:32.216153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.706 
01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.706 [2024-10-15 01:17:32.291612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:19.706 [2024-10-15 01:17:32.291674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.706 [2024-10-15 01:17:32.291694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:19.706 [2024-10-15 01:17:32.291703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.706 [2024-10-15 01:17:32.293655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.706 [2024-10-15 01:17:32.293693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:19.706 [2024-10-15 01:17:32.293749] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:19.706 [2024-10-15 01:17:32.293796] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:19.706 [2024-10-15 01:17:32.293857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:16:19.706 [2024-10-15 01:17:32.293865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:19.706 [2024-10-15 01:17:32.293941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:19.706 [2024-10-15 01:17:32.293997] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:16:19.706 [2024-10-15 01:17:32.294005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:16:19.706 [2024-10-15 01:17:32.294063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.706 pt2 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.706 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:19.707 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.707 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.707 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.707 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.707 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.707 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.707 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.707 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.707 "name": "raid_bdev1", 00:16:19.707 "uuid": "56d5e80c-19e4-46e2-88cb-b80bb647435b", 00:16:19.707 "strip_size_kb": 0, 00:16:19.707 "state": "online", 00:16:19.707 "raid_level": "raid1", 00:16:19.707 "superblock": true, 00:16:19.707 "num_base_bdevs": 2, 00:16:19.707 "num_base_bdevs_discovered": 1, 00:16:19.707 "num_base_bdevs_operational": 1, 00:16:19.707 "base_bdevs_list": [ 00:16:19.707 { 00:16:19.707 "name": null, 00:16:19.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.707 "is_configured": false, 00:16:19.707 "data_offset": 256, 00:16:19.707 "data_size": 7936 00:16:19.707 }, 00:16:19.707 { 00:16:19.707 "name": "pt2", 00:16:19.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.707 "is_configured": true, 00:16:19.707 "data_offset": 256, 00:16:19.707 "data_size": 7936 00:16:19.707 } 00:16:19.707 ] 00:16:19.707 }' 00:16:19.707 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.707 01:17:32 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.277 [2024-10-15 01:17:32.802754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.277 [2024-10-15 01:17:32.802846] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.277 [2024-10-15 01:17:32.802959] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.277 [2024-10-15 01:17:32.803027] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.277 [2024-10-15 01:17:32.803082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:20.277 01:17:32 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.277 [2024-10-15 01:17:32.866664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:20.277 [2024-10-15 01:17:32.866737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.277 [2024-10-15 01:17:32.866757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:20.277 [2024-10-15 01:17:32.866774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.277 [2024-10-15 01:17:32.868731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.277 [2024-10-15 01:17:32.868769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:20.277 [2024-10-15 01:17:32.868843] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:20.277 [2024-10-15 01:17:32.868881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:20.277 [2024-10-15 01:17:32.868973] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:20.277 [2024-10-15 01:17:32.868988] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.277 [2024-10-15 01:17:32.869007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000002000 name raid_bdev1, state configuring 00:16:20.277 [2024-10-15 01:17:32.869042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:20.277 [2024-10-15 01:17:32.869115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:16:20.277 [2024-10-15 01:17:32.869131] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:20.277 [2024-10-15 01:17:32.869232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:20.277 [2024-10-15 01:17:32.869289] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:16:20.277 [2024-10-15 01:17:32.869296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:16:20.277 [2024-10-15 01:17:32.869364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.277 pt1 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:20.277 01:17:32 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.277 "name": "raid_bdev1", 00:16:20.277 "uuid": "56d5e80c-19e4-46e2-88cb-b80bb647435b", 00:16:20.277 "strip_size_kb": 0, 00:16:20.277 "state": "online", 00:16:20.277 "raid_level": "raid1", 00:16:20.277 "superblock": true, 00:16:20.277 "num_base_bdevs": 2, 00:16:20.277 "num_base_bdevs_discovered": 1, 00:16:20.277 "num_base_bdevs_operational": 1, 00:16:20.277 "base_bdevs_list": [ 00:16:20.277 { 00:16:20.277 "name": null, 00:16:20.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.277 "is_configured": false, 00:16:20.277 "data_offset": 256, 00:16:20.277 "data_size": 7936 00:16:20.277 }, 00:16:20.277 { 00:16:20.277 "name": "pt2", 00:16:20.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.277 "is_configured": true, 00:16:20.277 "data_offset": 256, 00:16:20.277 
"data_size": 7936 00:16:20.277 } 00:16:20.277 ] 00:16:20.277 }' 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.277 01:17:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.847 [2024-10-15 01:17:33.394040] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 56d5e80c-19e4-46e2-88cb-b80bb647435b '!=' 56d5e80c-19e4-46e2-88cb-b80bb647435b ']' 00:16:20.847 01:17:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 98729 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98729 ']' 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98729 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98729 00:16:20.847 killing process with pid 98729 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98729' 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 98729 00:16:20.847 [2024-10-15 01:17:33.465405] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:20.847 [2024-10-15 01:17:33.465494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.847 [2024-10-15 01:17:33.465543] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.847 [2024-10-15 01:17:33.465553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:16:20.847 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 98729 00:16:20.847 [2024-10-15 01:17:33.489438] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.108 ************************************ 00:16:21.108 END TEST raid_superblock_test_md_interleaved 00:16:21.108 ************************************ 00:16:21.108 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:16:21.108 00:16:21.108 real 0m5.049s 00:16:21.108 user 0m8.358s 00:16:21.108 sys 0m1.037s 00:16:21.108 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:21.108 01:17:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.108 01:17:33 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:16:21.108 01:17:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:21.108 01:17:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:21.108 01:17:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.108 ************************************ 00:16:21.108 START TEST raid_rebuild_test_sb_md_interleaved 00:16:21.108 ************************************ 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:16:21.108 01:17:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:21.108 
01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=99041 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99041 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99041 ']' 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:21.108 01:17:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.368 [2024-10-15 01:17:33.885838] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:16:21.368 [2024-10-15 01:17:33.886103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99041 ] 00:16:21.368 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:21.368 Zero copy mechanism will not be used. 00:16:21.368 [2024-10-15 01:17:34.034783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.368 [2024-10-15 01:17:34.065544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.628 [2024-10-15 01:17:34.109028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.628 [2024-10-15 01:17:34.109132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.198 BaseBdev1_malloc 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:22.198 01:17:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.198 [2024-10-15 01:17:34.775958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:22.198 [2024-10-15 01:17:34.776029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.198 [2024-10-15 01:17:34.776057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:22.198 [2024-10-15 01:17:34.776068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.198 [2024-10-15 01:17:34.778128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.198 [2024-10-15 01:17:34.778169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:22.198 BaseBdev1 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.198 BaseBdev2_malloc 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.198 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.198 [2024-10-15 01:17:34.804893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:22.198 [2024-10-15 01:17:34.804948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.198 [2024-10-15 01:17:34.804971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:22.198 [2024-10-15 01:17:34.804980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.198 [2024-10-15 01:17:34.806865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.199 [2024-10-15 01:17:34.806910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:22.199 BaseBdev2 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.199 spare_malloc 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:16:22.199 spare_delay 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.199 [2024-10-15 01:17:34.845779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:22.199 [2024-10-15 01:17:34.845840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.199 [2024-10-15 01:17:34.845866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:22.199 [2024-10-15 01:17:34.845874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.199 [2024-10-15 01:17:34.847782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.199 [2024-10-15 01:17:34.847818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:22.199 spare 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.199 [2024-10-15 01:17:34.857810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.199 [2024-10-15 01:17:34.859602] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.199 [2024-10-15 01:17:34.859773] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:22.199 [2024-10-15 01:17:34.859785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:22.199 [2024-10-15 01:17:34.859870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:22.199 [2024-10-15 01:17:34.859938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:22.199 [2024-10-15 01:17:34.859950] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:22.199 [2024-10-15 01:17:34.860019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.199 "name": "raid_bdev1", 00:16:22.199 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:22.199 "strip_size_kb": 0, 00:16:22.199 "state": "online", 00:16:22.199 "raid_level": "raid1", 00:16:22.199 "superblock": true, 00:16:22.199 "num_base_bdevs": 2, 00:16:22.199 "num_base_bdevs_discovered": 2, 00:16:22.199 "num_base_bdevs_operational": 2, 00:16:22.199 "base_bdevs_list": [ 00:16:22.199 { 00:16:22.199 "name": "BaseBdev1", 00:16:22.199 "uuid": "931f92a2-cd06-52c2-8da3-5421fef592b0", 00:16:22.199 "is_configured": true, 00:16:22.199 "data_offset": 256, 00:16:22.199 "data_size": 7936 00:16:22.199 }, 00:16:22.199 { 00:16:22.199 "name": "BaseBdev2", 00:16:22.199 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:22.199 "is_configured": true, 00:16:22.199 "data_offset": 256, 00:16:22.199 "data_size": 7936 00:16:22.199 } 00:16:22.199 ] 00:16:22.199 }' 00:16:22.199 01:17:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.199 01:17:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:22.769 [2024-10-15 01:17:35.309336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.769 [2024-10-15 01:17:35.396883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.769 "name": "raid_bdev1", 00:16:22.769 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:22.769 "strip_size_kb": 0, 00:16:22.769 "state": "online", 00:16:22.769 "raid_level": "raid1", 00:16:22.769 "superblock": true, 00:16:22.769 "num_base_bdevs": 2, 00:16:22.769 "num_base_bdevs_discovered": 1, 00:16:22.769 "num_base_bdevs_operational": 1, 00:16:22.769 "base_bdevs_list": [ 00:16:22.769 { 00:16:22.769 "name": null, 00:16:22.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.769 "is_configured": false, 00:16:22.769 "data_offset": 0, 00:16:22.769 "data_size": 7936 00:16:22.769 }, 00:16:22.769 { 00:16:22.769 "name": "BaseBdev2", 00:16:22.769 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:22.769 "is_configured": true, 00:16:22.769 "data_offset": 256, 00:16:22.769 "data_size": 7936 00:16:22.769 } 00:16:22.769 ] 00:16:22.769 }' 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.769 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.339 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:23.339 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.339 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # 
set +x 00:16:23.339 [2024-10-15 01:17:35.856114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:23.340 [2024-10-15 01:17:35.870099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:23.340 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.340 01:17:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:23.340 [2024-10-15 01:17:35.872862] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:24.278 01:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.278 01:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.278 01:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.278 01:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.278 01:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.278 01:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.278 01:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.278 01:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.278 01:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.278 01:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.278 01:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:24.278 "name": "raid_bdev1", 00:16:24.278 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:24.278 "strip_size_kb": 0, 00:16:24.278 "state": "online", 00:16:24.278 "raid_level": "raid1", 00:16:24.278 "superblock": true, 00:16:24.278 "num_base_bdevs": 2, 00:16:24.278 "num_base_bdevs_discovered": 2, 00:16:24.278 "num_base_bdevs_operational": 2, 00:16:24.278 "process": { 00:16:24.278 "type": "rebuild", 00:16:24.278 "target": "spare", 00:16:24.278 "progress": { 00:16:24.278 "blocks": 2560, 00:16:24.278 "percent": 32 00:16:24.278 } 00:16:24.278 }, 00:16:24.278 "base_bdevs_list": [ 00:16:24.278 { 00:16:24.278 "name": "spare", 00:16:24.278 "uuid": "15eddaba-c193-54f5-93b6-ecc3133347ac", 00:16:24.278 "is_configured": true, 00:16:24.278 "data_offset": 256, 00:16:24.278 "data_size": 7936 00:16:24.278 }, 00:16:24.278 { 00:16:24.278 "name": "BaseBdev2", 00:16:24.278 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:24.278 "is_configured": true, 00:16:24.278 "data_offset": 256, 00:16:24.278 "data_size": 7936 00:16:24.278 } 00:16:24.278 ] 00:16:24.278 }' 00:16:24.278 01:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.278 01:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.278 01:17:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.538 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.538 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:24.538 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.538 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.538 [2024-10-15 
01:17:37.020493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:24.538 [2024-10-15 01:17:37.078860] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:24.538 [2024-10-15 01:17:37.078933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.538 [2024-10-15 01:17:37.078951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:24.538 [2024-10-15 01:17:37.078959] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:24.538 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.539 01:17:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.539 "name": "raid_bdev1", 00:16:24.539 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:24.539 "strip_size_kb": 0, 00:16:24.539 "state": "online", 00:16:24.539 "raid_level": "raid1", 00:16:24.539 "superblock": true, 00:16:24.539 "num_base_bdevs": 2, 00:16:24.539 "num_base_bdevs_discovered": 1, 00:16:24.539 "num_base_bdevs_operational": 1, 00:16:24.539 "base_bdevs_list": [ 00:16:24.539 { 00:16:24.539 "name": null, 00:16:24.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.539 "is_configured": false, 00:16:24.539 "data_offset": 0, 00:16:24.539 "data_size": 7936 00:16:24.539 }, 00:16:24.539 { 00:16:24.539 "name": "BaseBdev2", 00:16:24.539 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:24.539 "is_configured": true, 00:16:24.539 "data_offset": 256, 00:16:24.539 "data_size": 7936 00:16:24.539 } 00:16:24.539 ] 00:16:24.539 }' 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.539 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.108 01:17:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.108 "name": "raid_bdev1", 00:16:25.108 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:25.108 "strip_size_kb": 0, 00:16:25.108 "state": "online", 00:16:25.108 "raid_level": "raid1", 00:16:25.108 "superblock": true, 00:16:25.108 "num_base_bdevs": 2, 00:16:25.108 "num_base_bdevs_discovered": 1, 00:16:25.108 "num_base_bdevs_operational": 1, 00:16:25.108 "base_bdevs_list": [ 00:16:25.108 { 00:16:25.108 "name": null, 00:16:25.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.108 "is_configured": false, 00:16:25.108 "data_offset": 0, 00:16:25.108 "data_size": 7936 00:16:25.108 }, 00:16:25.108 { 00:16:25.108 "name": "BaseBdev2", 00:16:25.108 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:25.108 "is_configured": true, 00:16:25.108 "data_offset": 256, 
00:16:25.108 "data_size": 7936 00:16:25.108 } 00:16:25.108 ] 00:16:25.108 }' 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.108 [2024-10-15 01:17:37.686377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:25.108 [2024-10-15 01:17:37.690147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:25.108 [2024-10-15 01:17:37.692163] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.108 01:17:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:26.045 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.045 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.045 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.045 01:17:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.045 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.045 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.045 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.045 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.045 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.045 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.045 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.045 "name": "raid_bdev1", 00:16:26.045 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:26.045 "strip_size_kb": 0, 00:16:26.045 "state": "online", 00:16:26.045 "raid_level": "raid1", 00:16:26.045 "superblock": true, 00:16:26.045 "num_base_bdevs": 2, 00:16:26.045 "num_base_bdevs_discovered": 2, 00:16:26.045 "num_base_bdevs_operational": 2, 00:16:26.045 "process": { 00:16:26.045 "type": "rebuild", 00:16:26.045 "target": "spare", 00:16:26.045 "progress": { 00:16:26.045 "blocks": 2560, 00:16:26.045 "percent": 32 00:16:26.045 } 00:16:26.045 }, 00:16:26.045 "base_bdevs_list": [ 00:16:26.045 { 00:16:26.045 "name": "spare", 00:16:26.045 "uuid": "15eddaba-c193-54f5-93b6-ecc3133347ac", 00:16:26.045 "is_configured": true, 00:16:26.045 "data_offset": 256, 00:16:26.045 "data_size": 7936 00:16:26.045 }, 00:16:26.045 { 00:16:26.045 "name": "BaseBdev2", 00:16:26.045 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:26.045 "is_configured": true, 00:16:26.045 "data_offset": 256, 00:16:26.045 "data_size": 7936 00:16:26.045 } 
00:16:26.045 ] 00:16:26.045 }' 00:16:26.045 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:26.305 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=610 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.305 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.305 "name": "raid_bdev1", 00:16:26.305 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:26.305 "strip_size_kb": 0, 00:16:26.305 "state": "online", 00:16:26.305 "raid_level": "raid1", 00:16:26.305 "superblock": true, 00:16:26.305 "num_base_bdevs": 2, 00:16:26.305 "num_base_bdevs_discovered": 2, 00:16:26.305 "num_base_bdevs_operational": 2, 00:16:26.305 "process": { 00:16:26.305 "type": "rebuild", 00:16:26.305 "target": "spare", 00:16:26.305 "progress": { 00:16:26.305 "blocks": 2816, 00:16:26.305 "percent": 35 00:16:26.305 } 00:16:26.305 }, 00:16:26.305 "base_bdevs_list": [ 00:16:26.305 { 00:16:26.305 "name": "spare", 00:16:26.305 "uuid": "15eddaba-c193-54f5-93b6-ecc3133347ac", 00:16:26.305 "is_configured": true, 00:16:26.305 "data_offset": 256, 00:16:26.306 "data_size": 7936 00:16:26.306 }, 00:16:26.306 { 00:16:26.306 "name": "BaseBdev2", 00:16:26.306 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:26.306 "is_configured": true, 00:16:26.306 "data_offset": 256, 00:16:26.306 "data_size": 7936 00:16:26.306 } 00:16:26.306 ] 00:16:26.306 }' 00:16:26.306 01:17:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.306 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.306 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.306 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.306 01:17:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:27.266 01:17:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.266 01:17:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.266 01:17:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.266 01:17:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.266 01:17:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.266 01:17:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.266 01:17:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.266 01:17:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.266 01:17:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.266 01:17:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.525 01:17:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.525 
01:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.525 "name": "raid_bdev1", 00:16:27.525 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:27.525 "strip_size_kb": 0, 00:16:27.525 "state": "online", 00:16:27.525 "raid_level": "raid1", 00:16:27.525 "superblock": true, 00:16:27.525 "num_base_bdevs": 2, 00:16:27.525 "num_base_bdevs_discovered": 2, 00:16:27.525 "num_base_bdevs_operational": 2, 00:16:27.525 "process": { 00:16:27.525 "type": "rebuild", 00:16:27.525 "target": "spare", 00:16:27.526 "progress": { 00:16:27.526 "blocks": 5632, 00:16:27.526 "percent": 70 00:16:27.526 } 00:16:27.526 }, 00:16:27.526 "base_bdevs_list": [ 00:16:27.526 { 00:16:27.526 "name": "spare", 00:16:27.526 "uuid": "15eddaba-c193-54f5-93b6-ecc3133347ac", 00:16:27.526 "is_configured": true, 00:16:27.526 "data_offset": 256, 00:16:27.526 "data_size": 7936 00:16:27.526 }, 00:16:27.526 { 00:16:27.526 "name": "BaseBdev2", 00:16:27.526 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:27.526 "is_configured": true, 00:16:27.526 "data_offset": 256, 00:16:27.526 "data_size": 7936 00:16:27.526 } 00:16:27.526 ] 00:16:27.526 }' 00:16:27.526 01:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.526 01:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.526 01:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.526 01:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.526 01:17:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.096 [2024-10-15 01:17:40.805869] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:28.096 [2024-10-15 01:17:40.805966] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:28.096 [2024-10-15 01:17:40.806099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.665 "name": "raid_bdev1", 00:16:28.665 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:28.665 "strip_size_kb": 0, 00:16:28.665 "state": "online", 00:16:28.665 "raid_level": "raid1", 00:16:28.665 "superblock": true, 00:16:28.665 "num_base_bdevs": 2, 00:16:28.665 
"num_base_bdevs_discovered": 2, 00:16:28.665 "num_base_bdevs_operational": 2, 00:16:28.665 "base_bdevs_list": [ 00:16:28.665 { 00:16:28.665 "name": "spare", 00:16:28.665 "uuid": "15eddaba-c193-54f5-93b6-ecc3133347ac", 00:16:28.665 "is_configured": true, 00:16:28.665 "data_offset": 256, 00:16:28.665 "data_size": 7936 00:16:28.665 }, 00:16:28.665 { 00:16:28.665 "name": "BaseBdev2", 00:16:28.665 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:28.665 "is_configured": true, 00:16:28.665 "data_offset": 256, 00:16:28.665 "data_size": 7936 00:16:28.665 } 00:16:28.665 ] 00:16:28.665 }' 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.665 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.666 01:17:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.666 "name": "raid_bdev1", 00:16:28.666 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:28.666 "strip_size_kb": 0, 00:16:28.666 "state": "online", 00:16:28.666 "raid_level": "raid1", 00:16:28.666 "superblock": true, 00:16:28.666 "num_base_bdevs": 2, 00:16:28.666 "num_base_bdevs_discovered": 2, 00:16:28.666 "num_base_bdevs_operational": 2, 00:16:28.666 "base_bdevs_list": [ 00:16:28.666 { 00:16:28.666 "name": "spare", 00:16:28.666 "uuid": "15eddaba-c193-54f5-93b6-ecc3133347ac", 00:16:28.666 "is_configured": true, 00:16:28.666 "data_offset": 256, 00:16:28.666 "data_size": 7936 00:16:28.666 }, 00:16:28.666 { 00:16:28.666 "name": "BaseBdev2", 00:16:28.666 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:28.666 "is_configured": true, 00:16:28.666 "data_offset": 256, 00:16:28.666 "data_size": 7936 00:16:28.666 } 00:16:28.666 ] 00:16:28.666 }' 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:28.666 01:17:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.666 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.925 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.925 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.925 "name": 
"raid_bdev1", 00:16:28.925 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:28.925 "strip_size_kb": 0, 00:16:28.925 "state": "online", 00:16:28.925 "raid_level": "raid1", 00:16:28.925 "superblock": true, 00:16:28.925 "num_base_bdevs": 2, 00:16:28.925 "num_base_bdevs_discovered": 2, 00:16:28.925 "num_base_bdevs_operational": 2, 00:16:28.925 "base_bdevs_list": [ 00:16:28.925 { 00:16:28.925 "name": "spare", 00:16:28.925 "uuid": "15eddaba-c193-54f5-93b6-ecc3133347ac", 00:16:28.925 "is_configured": true, 00:16:28.925 "data_offset": 256, 00:16:28.925 "data_size": 7936 00:16:28.925 }, 00:16:28.925 { 00:16:28.925 "name": "BaseBdev2", 00:16:28.925 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:28.925 "is_configured": true, 00:16:28.925 "data_offset": 256, 00:16:28.925 "data_size": 7936 00:16:28.925 } 00:16:28.925 ] 00:16:28.925 }' 00:16:28.925 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.925 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.184 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:29.184 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.184 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.185 [2024-10-15 01:17:41.792651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.185 [2024-10-15 01:17:41.792683] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:29.185 [2024-10-15 01:17:41.792781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.185 [2024-10-15 01:17:41.792851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.185 [2024-10-15 
01:17:41.792864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.185 01:17:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.185 [2024-10-15 01:17:41.864530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:29.185 [2024-10-15 01:17:41.864668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.185 [2024-10-15 01:17:41.864712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:29.185 [2024-10-15 01:17:41.864747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.185 [2024-10-15 01:17:41.866698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.185 [2024-10-15 01:17:41.866772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:29.185 [2024-10-15 01:17:41.866838] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:29.185 [2024-10-15 01:17:41.866896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:29.185 [2024-10-15 01:17:41.867000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.185 spare 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.185 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.445 [2024-10-15 01:17:41.966911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:16:29.445 [2024-10-15 01:17:41.966952] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:29.445 [2024-10-15 01:17:41.967120] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:16:29.445 [2024-10-15 01:17:41.967256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:16:29.445 [2024-10-15 01:17:41.967273] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:16:29.445 [2024-10-15 01:17:41.967388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.445 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.445 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:29.445 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.445 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.445 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.445 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.445 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.445 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.445 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.445 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.445 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.445 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.445 01:17:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.445 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.445 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.445 01:17:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.445 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.445 "name": "raid_bdev1", 00:16:29.445 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:29.445 "strip_size_kb": 0, 00:16:29.445 "state": "online", 00:16:29.445 "raid_level": "raid1", 00:16:29.445 "superblock": true, 00:16:29.445 "num_base_bdevs": 2, 00:16:29.445 "num_base_bdevs_discovered": 2, 00:16:29.445 "num_base_bdevs_operational": 2, 00:16:29.445 "base_bdevs_list": [ 00:16:29.445 { 00:16:29.445 "name": "spare", 00:16:29.445 "uuid": "15eddaba-c193-54f5-93b6-ecc3133347ac", 00:16:29.445 "is_configured": true, 00:16:29.445 "data_offset": 256, 00:16:29.445 "data_size": 7936 00:16:29.445 }, 00:16:29.445 { 00:16:29.445 "name": "BaseBdev2", 00:16:29.445 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:29.445 "is_configured": true, 00:16:29.445 "data_offset": 256, 00:16:29.445 "data_size": 7936 00:16:29.445 } 00:16:29.445 ] 00:16:29.445 }' 00:16:29.445 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.445 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.705 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.705 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.705 01:17:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.705 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.705 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.705 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.705 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.705 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.705 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.705 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.965 "name": "raid_bdev1", 00:16:29.965 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:29.965 "strip_size_kb": 0, 00:16:29.965 "state": "online", 00:16:29.965 "raid_level": "raid1", 00:16:29.965 "superblock": true, 00:16:29.965 "num_base_bdevs": 2, 00:16:29.965 "num_base_bdevs_discovered": 2, 00:16:29.965 "num_base_bdevs_operational": 2, 00:16:29.965 "base_bdevs_list": [ 00:16:29.965 { 00:16:29.965 "name": "spare", 00:16:29.965 "uuid": "15eddaba-c193-54f5-93b6-ecc3133347ac", 00:16:29.965 "is_configured": true, 00:16:29.965 "data_offset": 256, 00:16:29.965 "data_size": 7936 00:16:29.965 }, 00:16:29.965 { 00:16:29.965 "name": "BaseBdev2", 00:16:29.965 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:29.965 "is_configured": true, 00:16:29.965 "data_offset": 256, 00:16:29.965 "data_size": 7936 00:16:29.965 } 00:16:29.965 ] 00:16:29.965 }' 00:16:29.965 01:17:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.965 [2024-10-15 01:17:42.599398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:29.965 01:17:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.965 "name": "raid_bdev1", 00:16:29.965 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:29.965 "strip_size_kb": 0, 00:16:29.965 "state": "online", 00:16:29.965 
"raid_level": "raid1", 00:16:29.965 "superblock": true, 00:16:29.965 "num_base_bdevs": 2, 00:16:29.965 "num_base_bdevs_discovered": 1, 00:16:29.965 "num_base_bdevs_operational": 1, 00:16:29.965 "base_bdevs_list": [ 00:16:29.965 { 00:16:29.965 "name": null, 00:16:29.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.965 "is_configured": false, 00:16:29.965 "data_offset": 0, 00:16:29.965 "data_size": 7936 00:16:29.965 }, 00:16:29.965 { 00:16:29.965 "name": "BaseBdev2", 00:16:29.965 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:29.965 "is_configured": true, 00:16:29.965 "data_offset": 256, 00:16:29.965 "data_size": 7936 00:16:29.965 } 00:16:29.965 ] 00:16:29.965 }' 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.965 01:17:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.534 01:17:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:30.534 01:17:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.534 01:17:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.534 [2024-10-15 01:17:43.058627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:30.534 [2024-10-15 01:17:43.058879] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:30.534 [2024-10-15 01:17:43.058952] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:30.534 [2024-10-15 01:17:43.059063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:30.534 [2024-10-15 01:17:43.062850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:16:30.534 01:17:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.534 01:17:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:30.534 [2024-10-15 01:17:43.064927] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:31.473 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.474 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.474 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.474 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.474 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.474 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.474 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.474 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.474 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.474 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.474 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:31.474 "name": "raid_bdev1", 00:16:31.474 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:31.474 "strip_size_kb": 0, 00:16:31.474 "state": "online", 00:16:31.474 "raid_level": "raid1", 00:16:31.474 "superblock": true, 00:16:31.474 "num_base_bdevs": 2, 00:16:31.474 "num_base_bdevs_discovered": 2, 00:16:31.474 "num_base_bdevs_operational": 2, 00:16:31.474 "process": { 00:16:31.474 "type": "rebuild", 00:16:31.474 "target": "spare", 00:16:31.474 "progress": { 00:16:31.474 "blocks": 2560, 00:16:31.474 "percent": 32 00:16:31.474 } 00:16:31.474 }, 00:16:31.474 "base_bdevs_list": [ 00:16:31.474 { 00:16:31.474 "name": "spare", 00:16:31.474 "uuid": "15eddaba-c193-54f5-93b6-ecc3133347ac", 00:16:31.474 "is_configured": true, 00:16:31.474 "data_offset": 256, 00:16:31.474 "data_size": 7936 00:16:31.474 }, 00:16:31.474 { 00:16:31.474 "name": "BaseBdev2", 00:16:31.474 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:31.474 "is_configured": true, 00:16:31.474 "data_offset": 256, 00:16:31.474 "data_size": 7936 00:16:31.474 } 00:16:31.474 ] 00:16:31.474 }' 00:16:31.474 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.474 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.474 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.734 [2024-10-15 01:17:44.226166] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.734 [2024-10-15 01:17:44.270002] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:31.734 [2024-10-15 01:17:44.270071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.734 [2024-10-15 01:17:44.270088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.734 [2024-10-15 01:17:44.270095] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.734 01:17:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.734 "name": "raid_bdev1", 00:16:31.734 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:31.734 "strip_size_kb": 0, 00:16:31.734 "state": "online", 00:16:31.734 "raid_level": "raid1", 00:16:31.734 "superblock": true, 00:16:31.734 "num_base_bdevs": 2, 00:16:31.734 "num_base_bdevs_discovered": 1, 00:16:31.734 "num_base_bdevs_operational": 1, 00:16:31.734 "base_bdevs_list": [ 00:16:31.734 { 00:16:31.734 "name": null, 00:16:31.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.734 "is_configured": false, 00:16:31.734 "data_offset": 0, 00:16:31.734 "data_size": 7936 00:16:31.734 }, 00:16:31.734 { 00:16:31.734 "name": "BaseBdev2", 00:16:31.734 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:31.734 "is_configured": true, 00:16:31.734 "data_offset": 256, 00:16:31.734 "data_size": 7936 00:16:31.734 } 00:16:31.734 ] 00:16:31.734 }' 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.734 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.994 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:31.994 01:17:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.994 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.994 [2024-10-15 01:17:44.709548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:31.994 [2024-10-15 01:17:44.709708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.994 [2024-10-15 01:17:44.709750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:31.994 [2024-10-15 01:17:44.709794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.994 [2024-10-15 01:17:44.710011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.994 [2024-10-15 01:17:44.710062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:31.994 [2024-10-15 01:17:44.710151] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:31.994 [2024-10-15 01:17:44.710203] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:31.994 [2024-10-15 01:17:44.710268] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:31.994 [2024-10-15 01:17:44.710348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:31.994 [2024-10-15 01:17:44.714029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:16:31.994 spare 00:16:31.994 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.994 01:17:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:31.994 [2024-10-15 01:17:44.715992] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:33.374 "name": "raid_bdev1", 00:16:33.374 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:33.374 "strip_size_kb": 0, 00:16:33.374 "state": "online", 00:16:33.374 "raid_level": "raid1", 00:16:33.374 "superblock": true, 00:16:33.374 "num_base_bdevs": 2, 00:16:33.374 "num_base_bdevs_discovered": 2, 00:16:33.374 "num_base_bdevs_operational": 2, 00:16:33.374 "process": { 00:16:33.374 "type": "rebuild", 00:16:33.374 "target": "spare", 00:16:33.374 "progress": { 00:16:33.374 "blocks": 2560, 00:16:33.374 "percent": 32 00:16:33.374 } 00:16:33.374 }, 00:16:33.374 "base_bdevs_list": [ 00:16:33.374 { 00:16:33.374 "name": "spare", 00:16:33.374 "uuid": "15eddaba-c193-54f5-93b6-ecc3133347ac", 00:16:33.374 "is_configured": true, 00:16:33.374 "data_offset": 256, 00:16:33.374 "data_size": 7936 00:16:33.374 }, 00:16:33.374 { 00:16:33.374 "name": "BaseBdev2", 00:16:33.374 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:33.374 "is_configured": true, 00:16:33.374 "data_offset": 256, 00:16:33.374 "data_size": 7936 00:16:33.374 } 00:16:33.374 ] 00:16:33.374 }' 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.374 [2024-10-15 
01:17:45.881227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.374 [2024-10-15 01:17:45.921098] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:33.374 [2024-10-15 01:17:45.921251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.374 [2024-10-15 01:17:45.921270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.374 [2024-10-15 01:17:45.921280] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.374 01:17:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.374 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.375 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.375 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.375 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.375 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.375 "name": "raid_bdev1", 00:16:33.375 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:33.375 "strip_size_kb": 0, 00:16:33.375 "state": "online", 00:16:33.375 "raid_level": "raid1", 00:16:33.375 "superblock": true, 00:16:33.375 "num_base_bdevs": 2, 00:16:33.375 "num_base_bdevs_discovered": 1, 00:16:33.375 "num_base_bdevs_operational": 1, 00:16:33.375 "base_bdevs_list": [ 00:16:33.375 { 00:16:33.375 "name": null, 00:16:33.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.375 "is_configured": false, 00:16:33.375 "data_offset": 0, 00:16:33.375 "data_size": 7936 00:16:33.375 }, 00:16:33.375 { 00:16:33.375 "name": "BaseBdev2", 00:16:33.375 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:33.375 "is_configured": true, 00:16:33.375 "data_offset": 256, 00:16:33.375 "data_size": 7936 00:16:33.375 } 00:16:33.375 ] 00:16:33.375 }' 00:16:33.375 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.375 01:17:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.634 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.634 01:17:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.634 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.634 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.893 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.893 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.893 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.893 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.893 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.893 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.894 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.894 "name": "raid_bdev1", 00:16:33.894 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:33.894 "strip_size_kb": 0, 00:16:33.894 "state": "online", 00:16:33.894 "raid_level": "raid1", 00:16:33.894 "superblock": true, 00:16:33.894 "num_base_bdevs": 2, 00:16:33.894 "num_base_bdevs_discovered": 1, 00:16:33.894 "num_base_bdevs_operational": 1, 00:16:33.894 "base_bdevs_list": [ 00:16:33.894 { 00:16:33.894 "name": null, 00:16:33.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.894 "is_configured": false, 00:16:33.894 "data_offset": 0, 00:16:33.894 "data_size": 7936 00:16:33.894 }, 00:16:33.894 { 00:16:33.894 "name": "BaseBdev2", 00:16:33.894 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:33.894 "is_configured": true, 00:16:33.894 "data_offset": 256, 
00:16:33.894 "data_size": 7936 00:16:33.894 } 00:16:33.894 ] 00:16:33.894 }' 00:16:33.894 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.894 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.894 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.894 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.894 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:33.894 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.894 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.894 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.894 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:33.894 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.894 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.894 [2024-10-15 01:17:46.516468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:33.894 [2024-10-15 01:17:46.516534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.894 [2024-10-15 01:17:46.516555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:33.894 [2024-10-15 01:17:46.516566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.894 [2024-10-15 01:17:46.516748] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.894 [2024-10-15 01:17:46.516764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:33.894 [2024-10-15 01:17:46.516815] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:33.894 [2024-10-15 01:17:46.516831] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:33.894 [2024-10-15 01:17:46.516839] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:33.894 [2024-10-15 01:17:46.516853] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:33.894 BaseBdev1 00:16:33.894 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.894 01:17:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:34.834 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:34.834 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.834 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.834 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.834 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.834 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:34.834 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.834 01:17:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.834 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.834 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.834 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.834 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.834 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.834 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.834 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.094 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.094 "name": "raid_bdev1", 00:16:35.094 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:35.094 "strip_size_kb": 0, 00:16:35.094 "state": "online", 00:16:35.094 "raid_level": "raid1", 00:16:35.094 "superblock": true, 00:16:35.094 "num_base_bdevs": 2, 00:16:35.094 "num_base_bdevs_discovered": 1, 00:16:35.094 "num_base_bdevs_operational": 1, 00:16:35.094 "base_bdevs_list": [ 00:16:35.094 { 00:16:35.094 "name": null, 00:16:35.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.094 "is_configured": false, 00:16:35.094 "data_offset": 0, 00:16:35.094 "data_size": 7936 00:16:35.094 }, 00:16:35.094 { 00:16:35.094 "name": "BaseBdev2", 00:16:35.094 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:35.094 "is_configured": true, 00:16:35.094 "data_offset": 256, 00:16:35.094 "data_size": 7936 00:16:35.094 } 00:16:35.094 ] 00:16:35.094 }' 00:16:35.094 01:17:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.094 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.354 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.354 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.354 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.354 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.354 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.354 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.354 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.354 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.354 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.354 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.354 01:17:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.354 "name": "raid_bdev1", 00:16:35.354 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:35.354 "strip_size_kb": 0, 00:16:35.354 "state": "online", 00:16:35.354 "raid_level": "raid1", 00:16:35.354 "superblock": true, 00:16:35.354 "num_base_bdevs": 2, 00:16:35.354 "num_base_bdevs_discovered": 1, 00:16:35.354 "num_base_bdevs_operational": 1, 00:16:35.354 "base_bdevs_list": [ 00:16:35.354 { 00:16:35.354 "name": 
null, 00:16:35.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.354 "is_configured": false, 00:16:35.354 "data_offset": 0, 00:16:35.354 "data_size": 7936 00:16:35.354 }, 00:16:35.354 { 00:16:35.354 "name": "BaseBdev2", 00:16:35.354 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:35.354 "is_configured": true, 00:16:35.354 "data_offset": 256, 00:16:35.354 "data_size": 7936 00:16:35.354 } 00:16:35.354 ] 00:16:35.354 }' 00:16:35.354 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.354 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.354 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.614 [2024-10-15 01:17:48.117769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.614 [2024-10-15 01:17:48.117999] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:35.614 [2024-10-15 01:17:48.118069] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:35.614 request: 00:16:35.614 { 00:16:35.614 "base_bdev": "BaseBdev1", 00:16:35.614 "raid_bdev": "raid_bdev1", 00:16:35.614 "method": "bdev_raid_add_base_bdev", 00:16:35.614 "req_id": 1 00:16:35.614 } 00:16:35.614 Got JSON-RPC error response 00:16:35.614 response: 00:16:35.614 { 00:16:35.614 "code": -22, 00:16:35.614 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:35.614 } 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:35.614 01:17:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.552 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.552 "name": "raid_bdev1", 00:16:36.552 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:36.552 "strip_size_kb": 0, 
00:16:36.552 "state": "online", 00:16:36.553 "raid_level": "raid1", 00:16:36.553 "superblock": true, 00:16:36.553 "num_base_bdevs": 2, 00:16:36.553 "num_base_bdevs_discovered": 1, 00:16:36.553 "num_base_bdevs_operational": 1, 00:16:36.553 "base_bdevs_list": [ 00:16:36.553 { 00:16:36.553 "name": null, 00:16:36.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.553 "is_configured": false, 00:16:36.553 "data_offset": 0, 00:16:36.553 "data_size": 7936 00:16:36.553 }, 00:16:36.553 { 00:16:36.553 "name": "BaseBdev2", 00:16:36.553 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:36.553 "is_configured": true, 00:16:36.553 "data_offset": 256, 00:16:36.553 "data_size": 7936 00:16:36.553 } 00:16:36.553 ] 00:16:36.553 }' 00:16:36.553 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.553 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.122 
01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.122 "name": "raid_bdev1", 00:16:37.122 "uuid": "98eeebce-5080-49ae-93a6-4464fc9e9e17", 00:16:37.122 "strip_size_kb": 0, 00:16:37.122 "state": "online", 00:16:37.122 "raid_level": "raid1", 00:16:37.122 "superblock": true, 00:16:37.122 "num_base_bdevs": 2, 00:16:37.122 "num_base_bdevs_discovered": 1, 00:16:37.122 "num_base_bdevs_operational": 1, 00:16:37.122 "base_bdevs_list": [ 00:16:37.122 { 00:16:37.122 "name": null, 00:16:37.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.122 "is_configured": false, 00:16:37.122 "data_offset": 0, 00:16:37.122 "data_size": 7936 00:16:37.122 }, 00:16:37.122 { 00:16:37.122 "name": "BaseBdev2", 00:16:37.122 "uuid": "d1ee8960-318d-5025-a828-669ddd9953c3", 00:16:37.122 "is_configured": true, 00:16:37.122 "data_offset": 256, 00:16:37.122 "data_size": 7936 00:16:37.122 } 00:16:37.122 ] 00:16:37.122 }' 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99041 00:16:37.122 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99041 ']' 00:16:37.122 01:17:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99041 00:16:37.123 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:37.123 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:37.123 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99041 00:16:37.123 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:37.123 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:37.123 killing process with pid 99041 00:16:37.123 Received shutdown signal, test time was about 60.000000 seconds 00:16:37.123 00:16:37.123 Latency(us) 00:16:37.123 [2024-10-15T01:17:49.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.123 [2024-10-15T01:17:49.847Z] =================================================================================================================== 00:16:37.123 [2024-10-15T01:17:49.847Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:37.123 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99041' 00:16:37.123 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99041 00:16:37.123 [2024-10-15 01:17:49.755677] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.123 [2024-10-15 01:17:49.755809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.123 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99041 00:16:37.123 [2024-10-15 01:17:49.755861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:16:37.123 [2024-10-15 01:17:49.755871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:16:37.123 [2024-10-15 01:17:49.789606] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.383 ************************************ 00:16:37.383 END TEST raid_rebuild_test_sb_md_interleaved 00:16:37.383 ************************************ 00:16:37.383 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:16:37.383 00:16:37.383 real 0m16.219s 00:16:37.383 user 0m21.729s 00:16:37.383 sys 0m1.646s 00:16:37.383 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:37.383 01:17:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.383 01:17:50 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:16:37.383 01:17:50 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:16:37.383 01:17:50 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99041 ']' 00:16:37.383 01:17:50 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99041 00:16:37.383 01:17:50 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:16:37.383 00:16:37.383 real 9m51.962s 00:16:37.383 user 14m7.367s 00:16:37.383 sys 1m44.496s 00:16:37.383 ************************************ 00:16:37.383 END TEST bdev_raid 00:16:37.383 ************************************ 00:16:37.383 01:17:50 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:37.383 01:17:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.643 01:17:50 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:37.643 01:17:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:37.643 01:17:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:37.643 01:17:50 -- common/autotest_common.sh@10 -- # set +x 00:16:37.643 
************************************ 00:16:37.643 START TEST spdkcli_raid 00:16:37.643 ************************************ 00:16:37.643 01:17:50 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:37.643 * Looking for test storage... 00:16:37.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:37.643 01:17:50 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:37.643 01:17:50 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:16:37.643 01:17:50 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:37.643 01:17:50 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:37.643 01:17:50 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:16:37.643 01:17:50 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:37.903 01:17:50 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:37.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.903 --rc genhtml_branch_coverage=1 00:16:37.903 --rc genhtml_function_coverage=1 00:16:37.903 --rc genhtml_legend=1 00:16:37.903 --rc geninfo_all_blocks=1 00:16:37.903 --rc geninfo_unexecuted_blocks=1 00:16:37.903 00:16:37.903 ' 00:16:37.903 01:17:50 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:37.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.903 --rc genhtml_branch_coverage=1 00:16:37.903 --rc genhtml_function_coverage=1 00:16:37.903 --rc genhtml_legend=1 00:16:37.903 --rc geninfo_all_blocks=1 00:16:37.903 --rc geninfo_unexecuted_blocks=1 00:16:37.903 00:16:37.903 ' 00:16:37.903 
01:17:50 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:37.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.903 --rc genhtml_branch_coverage=1 00:16:37.903 --rc genhtml_function_coverage=1 00:16:37.903 --rc genhtml_legend=1 00:16:37.903 --rc geninfo_all_blocks=1 00:16:37.903 --rc geninfo_unexecuted_blocks=1 00:16:37.903 00:16:37.903 ' 00:16:37.903 01:17:50 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:37.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.903 --rc genhtml_branch_coverage=1 00:16:37.903 --rc genhtml_function_coverage=1 00:16:37.903 --rc genhtml_legend=1 00:16:37.903 --rc geninfo_all_blocks=1 00:16:37.903 --rc geninfo_unexecuted_blocks=1 00:16:37.903 00:16:37.903 ' 00:16:37.903 01:17:50 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:37.903 01:17:50 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:37.903 01:17:50 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:37.903 01:17:50 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:16:37.903 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:16:37.904 01:17:50 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:16:37.904 01:17:50 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:16:37.904 01:17:50 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:16:37.904 01:17:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:37.904 01:17:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:37.904 01:17:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:37.904 01:17:50 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:37.904 01:17:50 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:37.904 01:17:50 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:37.904 01:17:50 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:16:37.904 01:17:50 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:16:37.904 01:17:50 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:37.904 01:17:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.904 01:17:50 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:16:37.904 01:17:50 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=99712 00:16:37.904 01:17:50 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:37.904 01:17:50 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 99712 00:16:37.904 01:17:50 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 99712 ']' 00:16:37.904 01:17:50 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.904 01:17:50 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:37.904 01:17:50 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.904 01:17:50 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:37.904 01:17:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.904 [2024-10-15 01:17:50.494609] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:16:37.904 [2024-10-15 01:17:50.494815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99712 ] 00:16:38.164 [2024-10-15 01:17:50.642325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:38.164 [2024-10-15 01:17:50.673827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.164 [2024-10-15 01:17:50.673925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.732 01:17:51 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:38.732 01:17:51 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:16:38.732 01:17:51 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:16:38.732 01:17:51 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:38.732 01:17:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:38.732 01:17:51 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:16:38.732 01:17:51 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:38.732 01:17:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:38.732 01:17:51 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:16:38.732 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:16:38.732 ' 00:16:40.640 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:16:40.640 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:16:40.640 01:17:52 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:16:40.640 01:17:52 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:40.640 01:17:52 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:40.640 01:17:53 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:16:40.640 01:17:53 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:40.640 01:17:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.640 01:17:53 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:16:40.640 ' 00:16:41.594 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:16:41.594 01:17:54 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:16:41.594 01:17:54 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:41.594 01:17:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.594 01:17:54 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:16:41.594 01:17:54 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:41.594 01:17:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.594 01:17:54 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:16:41.594 01:17:54 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:16:42.163 01:17:54 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:16:42.163 01:17:54 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:16:42.163 01:17:54 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:16:42.163 01:17:54 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:42.163 01:17:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:42.163 01:17:54 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:16:42.163 01:17:54 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:42.163 01:17:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:42.163 01:17:54 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:16:42.163 ' 00:16:43.544 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:16:43.544 01:17:55 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:16:43.544 01:17:55 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:43.544 01:17:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.544 01:17:56 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:16:43.544 01:17:56 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:43.544 01:17:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.544 01:17:56 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:16:43.544 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:16:43.544 ' 00:16:44.924 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:16:44.924 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:16:44.924 01:17:57 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:16:44.924 01:17:57 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:44.924 01:17:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.924 01:17:57 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 99712 00:16:44.925 01:17:57 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 99712 ']' 00:16:44.925 01:17:57 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 99712 00:16:44.925 01:17:57 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:16:44.925 01:17:57 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:44.925 01:17:57 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99712 00:16:44.925 01:17:57 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:44.925 01:17:57 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:44.925 01:17:57 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99712' 00:16:44.925 killing process with pid 99712 00:16:44.925 01:17:57 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 99712 00:16:44.925 01:17:57 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 99712 00:16:45.494 01:17:57 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:16:45.494 01:17:57 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 99712 ']' 00:16:45.494 01:17:57 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 99712 00:16:45.494 01:17:57 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 99712 ']' 00:16:45.494 01:17:57 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 99712 00:16:45.494 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (99712) - No such process 00:16:45.494 01:17:57 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 99712 is not found' 00:16:45.494 Process with pid 99712 is not found 00:16:45.494 01:17:57 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:16:45.494 01:17:57 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:16:45.494 01:17:57 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:16:45.494 01:17:57 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:16:45.494 ************************************ 00:16:45.494 END TEST spdkcli_raid 
00:16:45.494 ************************************ 00:16:45.494 00:16:45.494 real 0m7.792s 00:16:45.494 user 0m16.595s 00:16:45.494 sys 0m1.105s 00:16:45.494 01:17:57 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:45.494 01:17:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:45.494 01:17:57 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:45.494 01:17:57 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:45.494 01:17:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:45.494 01:17:57 -- common/autotest_common.sh@10 -- # set +x 00:16:45.494 ************************************ 00:16:45.494 START TEST blockdev_raid5f 00:16:45.494 ************************************ 00:16:45.494 01:17:58 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:45.494 * Looking for test storage... 00:16:45.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:45.494 01:17:58 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:45.494 01:17:58 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:16:45.494 01:17:58 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:45.494 01:17:58 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.494 01:17:58 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:16:45.753 01:17:58 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.753 01:17:58 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.753 01:17:58 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.753 01:17:58 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:16:45.753 01:17:58 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.753 01:17:58 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:45.753 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.753 --rc genhtml_branch_coverage=1 00:16:45.753 --rc genhtml_function_coverage=1 00:16:45.753 --rc genhtml_legend=1 00:16:45.753 --rc geninfo_all_blocks=1 00:16:45.753 --rc geninfo_unexecuted_blocks=1 00:16:45.753 00:16:45.753 ' 00:16:45.753 01:17:58 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:45.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.753 --rc genhtml_branch_coverage=1 00:16:45.753 --rc genhtml_function_coverage=1 00:16:45.753 --rc genhtml_legend=1 00:16:45.753 --rc geninfo_all_blocks=1 00:16:45.753 --rc geninfo_unexecuted_blocks=1 00:16:45.753 00:16:45.753 ' 00:16:45.753 01:17:58 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:45.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.753 --rc genhtml_branch_coverage=1 00:16:45.753 --rc genhtml_function_coverage=1 00:16:45.753 --rc genhtml_legend=1 00:16:45.753 --rc geninfo_all_blocks=1 00:16:45.753 --rc geninfo_unexecuted_blocks=1 00:16:45.753 00:16:45.753 ' 00:16:45.753 01:17:58 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:45.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.753 --rc genhtml_branch_coverage=1 00:16:45.753 --rc genhtml_function_coverage=1 00:16:45.753 --rc genhtml_legend=1 00:16:45.753 --rc geninfo_all_blocks=1 00:16:45.753 --rc geninfo_unexecuted_blocks=1 00:16:45.753 00:16:45.753 ' 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=99970 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
99970 00:16:45.754 01:17:58 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:45.754 01:17:58 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 99970 ']' 00:16:45.754 01:17:58 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.754 01:17:58 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:45.754 01:17:58 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.754 01:17:58 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:45.754 01:17:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:45.754 [2024-10-15 01:17:58.335194] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:16:45.754 [2024-10-15 01:17:58.335406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99970 ] 00:16:46.013 [2024-10-15 01:17:58.482265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.013 [2024-10-15 01:17:58.511869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.583 01:17:59 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:46.583 01:17:59 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:16:46.583 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:46.583 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:16:46.583 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:16:46.583 01:17:59 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.583 01:17:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:46.583 Malloc0 00:16:46.583 Malloc1 00:16:46.583 Malloc2 00:16:46.583 01:17:59 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.583 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:46.583 01:17:59 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.583 01:17:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:46.583 01:17:59 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.583 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:16:46.583 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:46.583 01:17:59 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.583 01:17:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:46.583 01:17:59 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.583 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:46.583 01:17:59 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.583 01:17:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:46.583 01:17:59 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.584 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:46.584 01:17:59 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.584 01:17:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:46.584 01:17:59 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.584 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:46.843 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:16:46.843 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:46.843 01:17:59 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.843 01:17:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:46.843 01:17:59 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.843 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:46.844 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:46.844 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "6a1c06b9-c9a9-4ad2-940c-db2b8bfc2fcd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6a1c06b9-c9a9-4ad2-940c-db2b8bfc2fcd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6a1c06b9-c9a9-4ad2-940c-db2b8bfc2fcd",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "0629dfbb-c454-40cd-81d0-f74a146a7c09",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"08adb6cf-2a1f-4c5c-9217-8cc8891c9b2c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e1339bf3-4d04-4d0a-a792-269f37919779",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:46.844 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:46.844 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:16:46.844 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:46.844 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 99970 00:16:46.844 01:17:59 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 99970 ']' 00:16:46.844 01:17:59 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 99970 00:16:46.844 01:17:59 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:16:46.844 01:17:59 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:46.844 01:17:59 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99970 00:16:46.844 killing process with pid 99970 00:16:46.844 01:17:59 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:46.844 01:17:59 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:46.844 01:17:59 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99970' 00:16:46.844 01:17:59 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 99970 00:16:46.844 01:17:59 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 99970 00:16:47.104 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:47.104 01:17:59 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:47.104 01:17:59 
blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:47.104 01:17:59 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:47.104 01:17:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:47.363 ************************************ 00:16:47.363 START TEST bdev_hello_world 00:16:47.363 ************************************ 00:16:47.363 01:17:59 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:47.363 [2024-10-15 01:17:59.905416] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:16:47.363 [2024-10-15 01:17:59.905615] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100011 ] 00:16:47.363 [2024-10-15 01:18:00.051024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.363 [2024-10-15 01:18:00.080673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.623 [2024-10-15 01:18:00.257612] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:47.623 [2024-10-15 01:18:00.257669] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:16:47.623 [2024-10-15 01:18:00.257686] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:47.623 [2024-10-15 01:18:00.258002] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:47.623 [2024-10-15 01:18:00.258125] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:47.623 [2024-10-15 01:18:00.258142] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:47.623 [2024-10-15 01:18:00.258212] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from 
bdev : Hello World! 00:16:47.623 00:16:47.623 [2024-10-15 01:18:00.258244] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:47.883 ************************************ 00:16:47.883 00:16:47.883 real 0m0.653s 00:16:47.883 user 0m0.361s 00:16:47.883 sys 0m0.185s 00:16:47.883 01:18:00 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:47.883 01:18:00 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:47.883 END TEST bdev_hello_world 00:16:47.883 ************************************ 00:16:47.883 01:18:00 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:47.883 01:18:00 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:47.883 01:18:00 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:47.883 01:18:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:47.883 ************************************ 00:16:47.883 START TEST bdev_bounds 00:16:47.883 ************************************ 00:16:47.883 01:18:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:16:47.883 01:18:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100037 00:16:47.883 Process bdevio pid: 100037 00:16:47.883 01:18:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:47.883 01:18:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:47.883 01:18:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100037' 00:16:47.883 01:18:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100037 00:16:47.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:47.883 01:18:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 100037 ']' 00:16:47.883 01:18:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.883 01:18:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.883 01:18:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.883 01:18:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.883 01:18:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:48.143 [2024-10-15 01:18:00.627621] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:16:48.143 [2024-10-15 01:18:00.627842] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100037 ] 00:16:48.143 [2024-10-15 01:18:00.756707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:48.143 [2024-10-15 01:18:00.787837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.143 [2024-10-15 01:18:00.787923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.143 [2024-10-15 01:18:00.788018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.082 01:18:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:49.082 01:18:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:16:49.082 01:18:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:49.082 I/O targets: 00:16:49.082 raid5f: 131072 blocks of 512 bytes (64 
MiB) 00:16:49.082 00:16:49.082 00:16:49.082 CUnit - A unit testing framework for C - Version 2.1-3 00:16:49.082 http://cunit.sourceforge.net/ 00:16:49.082 00:16:49.082 00:16:49.082 Suite: bdevio tests on: raid5f 00:16:49.082 Test: blockdev write read block ...passed 00:16:49.082 Test: blockdev write zeroes read block ...passed 00:16:49.082 Test: blockdev write zeroes read no split ...passed 00:16:49.082 Test: blockdev write zeroes read split ...passed 00:16:49.082 Test: blockdev write zeroes read split partial ...passed 00:16:49.082 Test: blockdev reset ...passed 00:16:49.082 Test: blockdev write read 8 blocks ...passed 00:16:49.082 Test: blockdev write read size > 128k ...passed 00:16:49.082 Test: blockdev write read invalid size ...passed 00:16:49.082 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:49.082 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:49.082 Test: blockdev write read max offset ...passed 00:16:49.082 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:49.082 Test: blockdev writev readv 8 blocks ...passed 00:16:49.082 Test: blockdev writev readv 30 x 1block ...passed 00:16:49.082 Test: blockdev writev readv block ...passed 00:16:49.082 Test: blockdev writev readv size > 128k ...passed 00:16:49.082 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:49.082 Test: blockdev comparev and writev ...passed 00:16:49.082 Test: blockdev nvme passthru rw ...passed 00:16:49.082 Test: blockdev nvme passthru vendor specific ...passed 00:16:49.082 Test: blockdev nvme admin passthru ...passed 00:16:49.082 Test: blockdev copy ...passed 00:16:49.082 00:16:49.082 Run Summary: Type Total Ran Passed Failed Inactive 00:16:49.082 suites 1 1 n/a 0 0 00:16:49.082 tests 23 23 23 0 0 00:16:49.082 asserts 130 130 130 0 n/a 00:16:49.082 00:16:49.082 Elapsed time = 0.323 seconds 00:16:49.082 0 00:16:49.082 01:18:01 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 100037 00:16:49.082 01:18:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 100037 ']' 00:16:49.082 01:18:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 100037 00:16:49.082 01:18:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:16:49.082 01:18:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.082 01:18:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100037 00:16:49.082 01:18:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:49.082 01:18:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:49.082 01:18:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100037' 00:16:49.082 killing process with pid 100037 00:16:49.082 01:18:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 100037 00:16:49.082 01:18:01 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 100037 00:16:49.341 01:18:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:49.341 00:16:49.341 real 0m1.482s 00:16:49.341 user 0m3.785s 00:16:49.341 sys 0m0.305s 00:16:49.341 01:18:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:49.341 01:18:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:49.341 ************************************ 00:16:49.341 END TEST bdev_bounds 00:16:49.341 ************************************ 00:16:49.601 01:18:02 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:49.601 01:18:02 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:49.601 01:18:02 blockdev_raid5f -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:16:49.601 01:18:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:49.601 ************************************ 00:16:49.601 START TEST bdev_nbd 00:16:49.601 ************************************ 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 
00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100086 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100086 /var/tmp/spdk-nbd.sock 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 100086 ']' 00:16:49.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:49.601 01:18:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:49.601 [2024-10-15 01:18:02.192459] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:16:49.601 [2024-10-15 01:18:02.192611] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.861 [2024-10-15 01:18:02.338985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.861 [2024-10-15 01:18:02.369502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.431 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.431 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:16:50.431 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:16:50.431 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:50.431 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:16:50.431 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:50.431 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:16:50.431 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:50.431 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:16:50.431 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:50.431 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:50.431 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:50.431 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:50.431 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:50.431 01:18:03 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:50.692 1+0 records in 00:16:50.692 1+0 records out 00:16:50.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395887 s, 10.3 MB/s 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:50.692 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:50.952 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:50.952 { 00:16:50.952 "nbd_device": "/dev/nbd0", 00:16:50.952 "bdev_name": "raid5f" 00:16:50.952 } 00:16:50.952 ]' 00:16:50.952 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:50.952 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:50.952 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:50.952 { 00:16:50.952 "nbd_device": "/dev/nbd0", 00:16:50.952 "bdev_name": "raid5f" 00:16:50.952 } 00:16:50.952 ]' 00:16:50.952 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:50.952 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:50.952 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:50.952 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:50.952 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:50.952 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:50.952 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:51.212 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:16:51.212 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:51.212 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:51.212 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:51.212 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:51.212 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:51.212 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:51.212 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:51.212 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:51.212 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:51.212 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:51.471 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:51.471 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:51.471 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:51.471 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:51.471 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:51.471 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:51.471 01:18:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:51.471 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:16:51.732 /dev/nbd0 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:51.732 01:18:04 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:51.732 1+0 records in 00:16:51.732 1+0 records out 00:16:51.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314235 s, 13.0 MB/s 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:51.732 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:51.992 { 00:16:51.992 "nbd_device": "/dev/nbd0", 00:16:51.992 "bdev_name": "raid5f" 00:16:51.992 } 00:16:51.992 ]' 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:51.992 { 00:16:51.992 "nbd_device": "/dev/nbd0", 00:16:51.992 "bdev_name": "raid5f" 00:16:51.992 } 00:16:51.992 ]' 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:51.992 256+0 records in 00:16:51.992 256+0 records out 00:16:51.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141969 s, 73.9 MB/s 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:51.992 256+0 records in 00:16:51.992 256+0 records out 00:16:51.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275235 s, 38.1 MB/s 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:51.992 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:52.252 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:52.252 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:52.252 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:52.252 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:52.252 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:52.252 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:52.252 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:52.252 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:52.252 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:52.252 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:52.252 01:18:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:16:52.513 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:52.513 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:52.513 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:52.513 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:52.513 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:52.513 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:52.513 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:52.513 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:52.513 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:52.513 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:52.514 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:52.514 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:52.514 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:52.514 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:52.514 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:52.514 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:52.773 malloc_lvol_verify 00:16:52.773 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:52.773 2687f153-0d51-4668-a09b-6c6ad053a3b0 00:16:53.033 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:53.033 68f334b8-c01a-4bed-8c21-3f6ca73b37c1 00:16:53.033 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:53.293 /dev/nbd0 00:16:53.293 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:53.293 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:53.293 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:53.293 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:53.293 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:53.293 mke2fs 1.47.0 (5-Feb-2023) 00:16:53.293 Discarding device blocks: 0/4096 done 00:16:53.293 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:53.293 00:16:53.293 Allocating group tables: 0/1 done 00:16:53.293 Writing inode tables: 0/1 done 00:16:53.293 Creating journal (1024 blocks): done 00:16:53.293 Writing superblocks and filesystem accounting information: 0/1 done 00:16:53.293 00:16:53.293 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:53.293 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:53.293 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:53.293 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:53.293 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:53.293 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:53.293 01:18:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100086 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 100086 ']' 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 100086 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100086 00:16:53.553 killing process with pid 100086 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100086' 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 100086 00:16:53.553 01:18:06 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 100086 00:16:53.819 01:18:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:53.819 00:16:53.819 real 0m4.342s 00:16:53.819 user 0m6.410s 00:16:53.819 sys 0m1.205s 00:16:53.819 01:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:53.819 01:18:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:53.819 ************************************ 00:16:53.819 END TEST bdev_nbd 00:16:53.819 ************************************ 00:16:53.819 01:18:06 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:53.819 01:18:06 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:16:53.819 01:18:06 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:16:53.819 01:18:06 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:53.819 01:18:06 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:53.819 01:18:06 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:53.819 01:18:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:53.819 ************************************ 00:16:53.819 START TEST bdev_fio 00:16:53.819 ************************************ 00:16:53.819 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:16:53.819 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:54.095 ************************************ 00:16:54.095 START TEST bdev_fio_rw_verify 00:16:54.095 ************************************ 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:54.095 01:18:06 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:54.371 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:54.371 fio-3.35 00:16:54.371 Starting 1 thread 00:17:06.611 00:17:06.611 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100273: Tue Oct 15 01:18:17 2024 00:17:06.611 read: IOPS=11.4k, BW=44.4MiB/s (46.6MB/s)(444MiB/10001msec) 00:17:06.611 slat (nsec): min=18423, max=60322, avg=20851.32, stdev=2012.98 00:17:06.611 clat (usec): min=11, max=374, avg=140.33, stdev=49.99 00:17:06.611 lat (usec): min=31, max=401, avg=161.18, stdev=50.32 00:17:06.611 clat percentiles (usec): 00:17:06.611 | 50.000th=[ 145], 99.000th=[ 243], 99.900th=[ 269], 99.990th=[ 306], 00:17:06.611 | 99.999th=[ 347] 00:17:06.611 write: IOPS=11.9k, BW=46.5MiB/s (48.8MB/s)(460MiB/9882msec); 0 zone resets 00:17:06.611 slat (usec): min=7, max=243, avg=17.99, stdev= 4.10 00:17:06.611 clat (usec): min=60, max=1590, avg=322.00, stdev=49.89 00:17:06.611 lat (usec): min=76, max=1834, avg=339.99, stdev=51.34 00:17:06.611 clat percentiles (usec): 00:17:06.611 | 50.000th=[ 326], 99.000th=[ 437], 99.900th=[ 635], 99.990th=[ 1352], 00:17:06.611 | 99.999th=[ 1516] 00:17:06.611 bw ( KiB/s): min=44496, max=50664, per=98.92%, avg=47128.42, stdev=1674.53, samples=19 00:17:06.611 iops : min=11124, max=12666, avg=11782.11, stdev=418.63, samples=19 00:17:06.611 lat (usec) : 20=0.01%, 50=0.01%, 
100=12.01%, 250=40.04%, 500=47.85% 00:17:06.611 lat (usec) : 750=0.06%, 1000=0.02% 00:17:06.611 lat (msec) : 2=0.02% 00:17:06.611 cpu : usr=98.90%, sys=0.52%, ctx=25, majf=0, minf=12503 00:17:06.611 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:06.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.611 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.611 issued rwts: total=113676,117700,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.611 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:06.611 00:17:06.611 Run status group 0 (all jobs): 00:17:06.611 READ: bw=44.4MiB/s (46.6MB/s), 44.4MiB/s-44.4MiB/s (46.6MB/s-46.6MB/s), io=444MiB (466MB), run=10001-10001msec 00:17:06.611 WRITE: bw=46.5MiB/s (48.8MB/s), 46.5MiB/s-46.5MiB/s (48.8MB/s-48.8MB/s), io=460MiB (482MB), run=9882-9882msec 00:17:06.611 ----------------------------------------------------- 00:17:06.611 Suppressions used: 00:17:06.611 count bytes template 00:17:06.611 1 7 /usr/src/fio/parse.c 00:17:06.611 488 46848 /usr/src/fio/iolog.c 00:17:06.611 1 8 libtcmalloc_minimal.so 00:17:06.611 1 904 libcrypto.so 00:17:06.611 ----------------------------------------------------- 00:17:06.611 00:17:06.611 00:17:06.611 real 0m11.174s 00:17:06.611 user 0m11.394s 00:17:06.611 sys 0m0.633s 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:06.611 ************************************ 00:17:06.611 END TEST bdev_fio_rw_verify 00:17:06.611 ************************************ 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "6a1c06b9-c9a9-4ad2-940c-db2b8bfc2fcd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6a1c06b9-c9a9-4ad2-940c-db2b8bfc2fcd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6a1c06b9-c9a9-4ad2-940c-db2b8bfc2fcd",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "0629dfbb-c454-40cd-81d0-f74a146a7c09",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "08adb6cf-2a1f-4c5c-9217-8cc8891c9b2c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "e1339bf3-4d04-4d0a-a792-269f37919779",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:06.611 /home/vagrant/spdk_repo/spdk 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:17:06.611 00:17:06.611 real 0m11.456s 00:17:06.611 user 0m11.500s 00:17:06.611 sys 0m0.773s 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:06.611 01:18:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:06.611 ************************************ 00:17:06.611 END TEST bdev_fio 00:17:06.611 ************************************ 00:17:06.611 01:18:18 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:06.611 01:18:18 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:06.611 01:18:18 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:06.611 01:18:18 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:06.612 01:18:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:06.612 ************************************ 00:17:06.612 START TEST bdev_verify 00:17:06.612 ************************************ 00:17:06.612 01:18:18 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:06.612 [2024-10-15 01:18:18.109308] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 
00:17:06.612 [2024-10-15 01:18:18.109424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100420 ] 00:17:06.612 [2024-10-15 01:18:18.252937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:06.612 [2024-10-15 01:18:18.283577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.612 [2024-10-15 01:18:18.283679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.612 Running I/O for 5 seconds... 00:17:07.810 15913.00 IOPS, 62.16 MiB/s [2024-10-15T01:18:21.476Z] 15757.50 IOPS, 61.55 MiB/s [2024-10-15T01:18:22.857Z] 15592.67 IOPS, 60.91 MiB/s [2024-10-15T01:18:23.796Z] 15790.75 IOPS, 61.68 MiB/s [2024-10-15T01:18:23.796Z] 15569.00 IOPS, 60.82 MiB/s 00:17:11.072 Latency(us) 00:17:11.072 [2024-10-15T01:18:23.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.072 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:11.072 Verification LBA range: start 0x0 length 0x2000 00:17:11.072 raid5f : 5.02 7741.71 30.24 0.00 0.00 24784.90 125.21 23581.51 00:17:11.072 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:11.072 Verification LBA range: start 0x2000 length 0x2000 00:17:11.072 raid5f : 5.02 7814.59 30.53 0.00 0.00 24592.34 348.79 23467.04 00:17:11.072 [2024-10-15T01:18:23.796Z] =================================================================================================================== 00:17:11.072 [2024-10-15T01:18:23.796Z] Total : 15556.30 60.77 0.00 0.00 24688.19 125.21 23581.51 00:17:11.072 00:17:11.072 real 0m5.683s 00:17:11.072 user 0m10.653s 00:17:11.072 sys 0m0.208s 00:17:11.072 01:18:23 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:11.072 01:18:23 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:11.072 ************************************ 00:17:11.072 END TEST bdev_verify 00:17:11.072 ************************************ 00:17:11.072 01:18:23 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:11.072 01:18:23 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:11.072 01:18:23 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:11.072 01:18:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:11.072 ************************************ 00:17:11.072 START TEST bdev_verify_big_io 00:17:11.072 ************************************ 00:17:11.072 01:18:23 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:11.332 [2024-10-15 01:18:23.864207] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:17:11.332 [2024-10-15 01:18:23.864333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100502 ] 00:17:11.332 [2024-10-15 01:18:24.009090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:11.332 [2024-10-15 01:18:24.039261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.332 [2024-10-15 01:18:24.039774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.592 Running I/O for 5 seconds... 
00:17:13.913 693.00 IOPS, 43.31 MiB/s [2024-10-15T01:18:27.576Z] 760.00 IOPS, 47.50 MiB/s [2024-10-15T01:18:28.545Z] 761.33 IOPS, 47.58 MiB/s [2024-10-15T01:18:29.484Z] 824.50 IOPS, 51.53 MiB/s [2024-10-15T01:18:29.484Z] 849.80 IOPS, 53.11 MiB/s 00:17:16.760 Latency(us) 00:17:16.760 [2024-10-15T01:18:29.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.760 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:16.760 Verification LBA range: start 0x0 length 0x200 00:17:16.760 raid5f : 5.10 423.49 26.47 0.00 0.00 7459050.83 177.97 329683.28 00:17:16.760 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:16.760 Verification LBA range: start 0x200 length 0x200 00:17:16.760 raid5f : 5.17 429.86 26.87 0.00 0.00 7241899.77 145.77 327851.71 00:17:16.760 [2024-10-15T01:18:29.484Z] =================================================================================================================== 00:17:16.760 [2024-10-15T01:18:29.484Z] Total : 853.35 53.33 0.00 0.00 7348938.39 145.77 329683.28 00:17:17.020 00:17:17.020 real 0m5.831s 00:17:17.020 user 0m10.940s 00:17:17.020 sys 0m0.213s 00:17:17.020 ************************************ 00:17:17.020 END TEST bdev_verify_big_io 00:17:17.020 ************************************ 00:17:17.020 01:18:29 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:17.020 01:18:29 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.020 01:18:29 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:17.020 01:18:29 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:17.020 01:18:29 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:17.020 01:18:29 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:17.020 ************************************ 00:17:17.020 START TEST bdev_write_zeroes 00:17:17.020 ************************************ 00:17:17.020 01:18:29 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:17.280 [2024-10-15 01:18:29.758884] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:17:17.280 [2024-10-15 01:18:29.759084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100588 ] 00:17:17.280 [2024-10-15 01:18:29.889563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.280 [2024-10-15 01:18:29.919378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.538 Running I/O for 1 seconds... 
00:17:18.476 26991.00 IOPS, 105.43 MiB/s 00:17:18.476 Latency(us) 00:17:18.476 [2024-10-15T01:18:31.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.476 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:18.476 raid5f : 1.01 26965.35 105.33 0.00 0.00 4732.38 1516.77 6524.98 00:17:18.476 [2024-10-15T01:18:31.200Z] =================================================================================================================== 00:17:18.476 [2024-10-15T01:18:31.200Z] Total : 26965.35 105.33 0.00 0.00 4732.38 1516.77 6524.98 00:17:18.735 00:17:18.735 real 0m1.647s 00:17:18.735 user 0m1.339s 00:17:18.735 sys 0m0.197s 00:17:18.735 01:18:31 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:18.735 01:18:31 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:18.735 ************************************ 00:17:18.735 END TEST bdev_write_zeroes 00:17:18.735 ************************************ 00:17:18.735 01:18:31 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:18.735 01:18:31 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:18.735 01:18:31 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:18.735 01:18:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:18.735 ************************************ 00:17:18.735 START TEST bdev_json_nonenclosed 00:17:18.735 ************************************ 00:17:18.735 01:18:31 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:18.995 [2024-10-15 
01:18:31.472897] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:17:18.995 [2024-10-15 01:18:31.473107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100620 ] 00:17:18.995 [2024-10-15 01:18:31.616428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.995 [2024-10-15 01:18:31.645814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.995 [2024-10-15 01:18:31.646000] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:18.995 [2024-10-15 01:18:31.646055] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:18.995 [2024-10-15 01:18:31.646083] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:19.255 00:17:19.255 real 0m0.339s 00:17:19.255 user 0m0.141s 00:17:19.255 sys 0m0.093s 00:17:19.255 01:18:31 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:19.255 01:18:31 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:19.255 ************************************ 00:17:19.255 END TEST bdev_json_nonenclosed 00:17:19.255 ************************************ 00:17:19.255 01:18:31 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:19.255 01:18:31 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:19.255 01:18:31 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:19.255 01:18:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:19.255 
************************************ 00:17:19.255 START TEST bdev_json_nonarray 00:17:19.255 ************************************ 00:17:19.255 01:18:31 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:19.255 [2024-10-15 01:18:31.879313] Starting SPDK v25.01-pre git sha1 3a02df0b1 / DPDK 22.11.4 initialization... 00:17:19.255 [2024-10-15 01:18:31.879495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100650 ] 00:17:19.515 [2024-10-15 01:18:32.021881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.515 [2024-10-15 01:18:32.051336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.515 [2024-10-15 01:18:32.051523] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:19.515 [2024-10-15 01:18:32.051575] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:19.515 [2024-10-15 01:18:32.051603] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:19.515 00:17:19.515 real 0m0.338s 00:17:19.515 user 0m0.137s 00:17:19.515 sys 0m0.097s 00:17:19.515 01:18:32 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:19.515 ************************************ 00:17:19.515 END TEST bdev_json_nonarray 00:17:19.515 ************************************ 00:17:19.515 01:18:32 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:19.515 01:18:32 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:17:19.515 01:18:32 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:17:19.515 01:18:32 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:17:19.515 01:18:32 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:19.515 01:18:32 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:17:19.515 01:18:32 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:19.515 01:18:32 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:19.515 01:18:32 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:17:19.515 01:18:32 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:17:19.515 01:18:32 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:17:19.515 01:18:32 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:17:19.515 ************************************ 00:17:19.515 END TEST blockdev_raid5f 00:17:19.515 ************************************ 00:17:19.515 00:17:19.515 real 0m34.203s 00:17:19.515 user 0m47.214s 00:17:19.515 sys 0m4.267s 00:17:19.515 01:18:32 blockdev_raid5f -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:17:19.515 01:18:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:19.775 01:18:32 -- spdk/autotest.sh@194 -- # uname -s 00:17:19.775 01:18:32 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:19.775 01:18:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:19.775 01:18:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:19.775 01:18:32 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:19.775 01:18:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:19.775 01:18:32 -- common/autotest_common.sh@10 -- # set +x 00:17:19.775 01:18:32 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:17:19.775 01:18:32 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:17:19.775 01:18:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:17:19.775 01:18:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:17:19.775 01:18:32 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:17:19.775 01:18:32 -- spdk/autotest.sh@381 -- # trap - 
SIGINT SIGTERM EXIT 00:17:19.775 01:18:32 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:17:19.775 01:18:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:19.775 01:18:32 -- common/autotest_common.sh@10 -- # set +x 00:17:19.775 01:18:32 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:17:19.775 01:18:32 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:17:19.775 01:18:32 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:17:19.775 01:18:32 -- common/autotest_common.sh@10 -- # set +x 00:17:21.683 INFO: APP EXITING 00:17:21.683 INFO: killing all VMs 00:17:21.683 INFO: killing vhost app 00:17:21.683 INFO: EXIT DONE 00:17:22.253 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:22.253 Waiting for block devices as requested 00:17:22.253 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:22.512 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:23.451 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:23.451 Cleaning 00:17:23.451 Removing: /var/run/dpdk/spdk0/config 00:17:23.451 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:17:23.451 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:17:23.451 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:17:23.451 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:17:23.451 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:17:23.451 Removing: /var/run/dpdk/spdk0/hugepage_info 00:17:23.451 Removing: /dev/shm/spdk_tgt_trace.pid68903 00:17:23.451 Removing: /var/run/dpdk/spdk0 00:17:23.451 Removing: /var/run/dpdk/spdk_pid100011 00:17:23.451 Removing: /var/run/dpdk/spdk_pid100037 00:17:23.451 Removing: /var/run/dpdk/spdk_pid100258 00:17:23.451 Removing: /var/run/dpdk/spdk_pid100420 00:17:23.451 Removing: /var/run/dpdk/spdk_pid100502 00:17:23.451 Removing: /var/run/dpdk/spdk_pid100588 00:17:23.451 Removing: 
/var/run/dpdk/spdk_pid100620 00:17:23.451 Removing: /var/run/dpdk/spdk_pid100650 00:17:23.451 Removing: /var/run/dpdk/spdk_pid68740 00:17:23.451 Removing: /var/run/dpdk/spdk_pid68903 00:17:23.451 Removing: /var/run/dpdk/spdk_pid69105 00:17:23.451 Removing: /var/run/dpdk/spdk_pid69192 00:17:23.451 Removing: /var/run/dpdk/spdk_pid69221 00:17:23.451 Removing: /var/run/dpdk/spdk_pid69327 00:17:23.451 Removing: /var/run/dpdk/spdk_pid69345 00:17:23.451 Removing: /var/run/dpdk/spdk_pid69533 00:17:23.451 Removing: /var/run/dpdk/spdk_pid69612 00:17:23.451 Removing: /var/run/dpdk/spdk_pid69686 00:17:23.451 Removing: /var/run/dpdk/spdk_pid69786 00:17:23.451 Removing: /var/run/dpdk/spdk_pid69872 00:17:23.451 Removing: /var/run/dpdk/spdk_pid69906 00:17:23.451 Removing: /var/run/dpdk/spdk_pid69937 00:17:23.451 Removing: /var/run/dpdk/spdk_pid70013 00:17:23.451 Removing: /var/run/dpdk/spdk_pid70108 00:17:23.451 Removing: /var/run/dpdk/spdk_pid70533 00:17:23.451 Removing: /var/run/dpdk/spdk_pid70581 00:17:23.451 Removing: /var/run/dpdk/spdk_pid70627 00:17:23.451 Removing: /var/run/dpdk/spdk_pid70643 00:17:23.451 Removing: /var/run/dpdk/spdk_pid70710 00:17:23.451 Removing: /var/run/dpdk/spdk_pid70726 00:17:23.451 Removing: /var/run/dpdk/spdk_pid70784 00:17:23.451 Removing: /var/run/dpdk/spdk_pid70800 00:17:23.451 Removing: /var/run/dpdk/spdk_pid70842 00:17:23.451 Removing: /var/run/dpdk/spdk_pid70860 00:17:23.451 Removing: /var/run/dpdk/spdk_pid70902 00:17:23.451 Removing: /var/run/dpdk/spdk_pid70920 00:17:23.451 Removing: /var/run/dpdk/spdk_pid71058 00:17:23.451 Removing: /var/run/dpdk/spdk_pid71089 00:17:23.451 Removing: /var/run/dpdk/spdk_pid71178 00:17:23.451 Removing: /var/run/dpdk/spdk_pid72353 00:17:23.451 Removing: /var/run/dpdk/spdk_pid72554 00:17:23.451 Removing: /var/run/dpdk/spdk_pid72683 00:17:23.451 Removing: /var/run/dpdk/spdk_pid73282 00:17:23.451 Removing: /var/run/dpdk/spdk_pid73483 00:17:23.451 Removing: /var/run/dpdk/spdk_pid73612 00:17:23.451 Removing: 
/var/run/dpdk/spdk_pid74219 00:17:23.451 Removing: /var/run/dpdk/spdk_pid74530 00:17:23.452 Removing: /var/run/dpdk/spdk_pid74659 00:17:23.452 Removing: /var/run/dpdk/spdk_pid76000 00:17:23.452 Removing: /var/run/dpdk/spdk_pid76242 00:17:23.452 Removing: /var/run/dpdk/spdk_pid76371 00:17:23.452 Removing: /var/run/dpdk/spdk_pid77701 00:17:23.452 Removing: /var/run/dpdk/spdk_pid77943 00:17:23.452 Removing: /var/run/dpdk/spdk_pid78072 00:17:23.452 Removing: /var/run/dpdk/spdk_pid79402 00:17:23.452 Removing: /var/run/dpdk/spdk_pid79831 00:17:23.452 Removing: /var/run/dpdk/spdk_pid79966 00:17:23.452 Removing: /var/run/dpdk/spdk_pid81390 00:17:23.452 Removing: /var/run/dpdk/spdk_pid81644 00:17:23.452 Removing: /var/run/dpdk/spdk_pid81773 00:17:23.452 Removing: /var/run/dpdk/spdk_pid83203 00:17:23.452 Removing: /var/run/dpdk/spdk_pid83451 00:17:23.452 Removing: /var/run/dpdk/spdk_pid83580 00:17:23.452 Removing: /var/run/dpdk/spdk_pid85010 00:17:23.452 Removing: /var/run/dpdk/spdk_pid85485 00:17:23.452 Removing: /var/run/dpdk/spdk_pid85615 00:17:23.712 Removing: /var/run/dpdk/spdk_pid85744 00:17:23.712 Removing: /var/run/dpdk/spdk_pid86145 00:17:23.712 Removing: /var/run/dpdk/spdk_pid86855 00:17:23.712 Removing: /var/run/dpdk/spdk_pid87225 00:17:23.712 Removing: /var/run/dpdk/spdk_pid87922 00:17:23.712 Removing: /var/run/dpdk/spdk_pid88356 00:17:23.712 Removing: /var/run/dpdk/spdk_pid89103 00:17:23.712 Removing: /var/run/dpdk/spdk_pid89496 00:17:23.712 Removing: /var/run/dpdk/spdk_pid91403 00:17:23.712 Removing: /var/run/dpdk/spdk_pid91830 00:17:23.712 Removing: /var/run/dpdk/spdk_pid92248 00:17:23.712 Removing: /var/run/dpdk/spdk_pid94281 00:17:23.712 Removing: /var/run/dpdk/spdk_pid94754 00:17:23.712 Removing: /var/run/dpdk/spdk_pid95237 00:17:23.712 Removing: /var/run/dpdk/spdk_pid96276 00:17:23.712 Removing: /var/run/dpdk/spdk_pid96582 00:17:23.712 Removing: /var/run/dpdk/spdk_pid97497 00:17:23.712 Removing: /var/run/dpdk/spdk_pid97814 00:17:23.712 Removing: 
/var/run/dpdk/spdk_pid98729 00:17:23.712 Removing: /var/run/dpdk/spdk_pid99041 00:17:23.712 Removing: /var/run/dpdk/spdk_pid99712 00:17:23.712 Removing: /var/run/dpdk/spdk_pid99970 00:17:23.712 Clean 00:17:23.712 01:18:36 -- common/autotest_common.sh@1451 -- # return 0 00:17:23.712 01:18:36 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:17:23.712 01:18:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:23.712 01:18:36 -- common/autotest_common.sh@10 -- # set +x 00:17:23.712 01:18:36 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:17:23.712 01:18:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:23.712 01:18:36 -- common/autotest_common.sh@10 -- # set +x 00:17:23.712 01:18:36 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:23.712 01:18:36 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:17:23.712 01:18:36 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:17:23.972 01:18:36 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:17:23.972 01:18:36 -- spdk/autotest.sh@394 -- # hostname 00:17:23.972 01:18:36 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:17:23.972 geninfo: WARNING: invalid characters removed from testname! 
00:17:45.924 01:18:57 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:48.463 01:19:00 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:50.373 01:19:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:52.277 01:19:04 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:54.185 01:19:06 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:17:56.087 01:19:08 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:17:57.990 01:19:10 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:17:58.252 01:19:10 -- common/autotest_common.sh@1690 -- $ [[ y == y ]]
00:17:58.252 01:19:10 -- common/autotest_common.sh@1691 -- $ lcov --version
00:17:58.252 01:19:10 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}'
00:17:58.252 01:19:10 -- common/autotest_common.sh@1691 -- $ lt 1.15 2
00:17:58.252 01:19:10 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:17:58.252 01:19:10 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:17:58.252 01:19:10 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:17:58.252 01:19:10 -- scripts/common.sh@336 -- $ IFS=.-:
00:17:58.252 01:19:10 -- scripts/common.sh@336 -- $ read -ra ver1
00:17:58.252 01:19:10 -- scripts/common.sh@337 -- $ IFS=.-:
00:17:58.252 01:19:10 -- scripts/common.sh@337 -- $ read -ra ver2
00:17:58.252 01:19:10 -- scripts/common.sh@338 -- $ local 'op=<'
00:17:58.252 01:19:10 -- scripts/common.sh@340 -- $ ver1_l=2
00:17:58.252 01:19:10 -- scripts/common.sh@341 -- $ ver2_l=1
00:17:58.252 01:19:10 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:17:58.252 01:19:10 -- scripts/common.sh@344 -- $ case "$op" in
00:17:58.252 01:19:10 -- scripts/common.sh@345 -- $ : 1
00:17:58.252 01:19:10 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:17:58.252 01:19:10 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:58.252 01:19:10 -- scripts/common.sh@365 -- $ decimal 1
00:17:58.252 01:19:10 -- scripts/common.sh@353 -- $ local d=1
00:17:58.252 01:19:10 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:17:58.252 01:19:10 -- scripts/common.sh@355 -- $ echo 1
00:17:58.252 01:19:10 -- scripts/common.sh@365 -- $ ver1[v]=1
00:17:58.252 01:19:10 -- scripts/common.sh@366 -- $ decimal 2
00:17:58.252 01:19:10 -- scripts/common.sh@353 -- $ local d=2
00:17:58.252 01:19:10 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:17:58.252 01:19:10 -- scripts/common.sh@355 -- $ echo 2
00:17:58.252 01:19:10 -- scripts/common.sh@366 -- $ ver2[v]=2
00:17:58.252 01:19:10 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:17:58.252 01:19:10 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:17:58.252 01:19:10 -- scripts/common.sh@368 -- $ return 0
00:17:58.252 01:19:10 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:58.252 01:19:10 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS=
00:17:58.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:58.252 --rc genhtml_branch_coverage=1
00:17:58.252 --rc genhtml_function_coverage=1
00:17:58.252 --rc genhtml_legend=1
00:17:58.252 --rc geninfo_all_blocks=1
00:17:58.252 --rc geninfo_unexecuted_blocks=1
00:17:58.252 
00:17:58.252 '
00:17:58.252 01:19:10 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS='
00:17:58.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:58.252 --rc genhtml_branch_coverage=1
00:17:58.252 --rc genhtml_function_coverage=1
00:17:58.252 --rc genhtml_legend=1
00:17:58.252 --rc geninfo_all_blocks=1
00:17:58.252 --rc geninfo_unexecuted_blocks=1
00:17:58.252 
00:17:58.252 '
00:17:58.252 01:19:10 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov
00:17:58.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:58.252 --rc genhtml_branch_coverage=1
00:17:58.252 --rc genhtml_function_coverage=1
00:17:58.252 --rc genhtml_legend=1
00:17:58.252 --rc geninfo_all_blocks=1
00:17:58.252 --rc geninfo_unexecuted_blocks=1
00:17:58.252 
00:17:58.252 '
00:17:58.252 01:19:10 -- common/autotest_common.sh@1705 -- $ LCOV='lcov
00:17:58.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:58.252 --rc genhtml_branch_coverage=1
00:17:58.252 --rc genhtml_function_coverage=1
00:17:58.252 --rc genhtml_legend=1
00:17:58.252 --rc geninfo_all_blocks=1
00:17:58.252 --rc geninfo_unexecuted_blocks=1
00:17:58.252 
00:17:58.252 '
00:17:58.252 01:19:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:17:58.252 01:19:10 -- scripts/common.sh@15 -- $ shopt -s extglob
00:17:58.252 01:19:10 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:17:58.252 01:19:10 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:58.252 01:19:10 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:58.252 01:19:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:58.253 01:19:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:58.253 01:19:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:58.253 01:19:10 -- paths/export.sh@5 -- $ export PATH
00:17:58.253 01:19:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:58.253 01:19:10 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:17:58.253 01:19:10 -- common/autobuild_common.sh@486 -- $ date +%s
00:17:58.253 01:19:10 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728955150.XXXXXX
00:17:58.253 01:19:10 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728955150.brSeM7
00:17:58.253 01:19:10 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:17:58.253 01:19:10 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']'
00:17:58.253 01:19:10 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:17:58.253 01:19:10 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:17:58.253 01:19:10 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:17:58.253 01:19:10 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:17:58.253 01:19:10 -- common/autobuild_common.sh@502 -- $ get_config_params
00:17:58.253 01:19:10 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:17:58.253 01:19:10 -- common/autotest_common.sh@10 -- $ set +x
00:17:58.253 01:19:10 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:17:58.253 01:19:10 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:17:58.253 01:19:10 -- pm/common@17 -- $ local monitor
00:17:58.253 01:19:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:17:58.253 01:19:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:17:58.253 01:19:10 -- pm/common@25 -- $ sleep 1
00:17:58.253 01:19:10 -- pm/common@21 -- $ date +%s
00:17:58.253 01:19:10 -- pm/common@21 -- $ date +%s
00:17:58.253 01:19:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728955150
00:17:58.253 01:19:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728955150
00:17:58.253 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728955150_collect-cpu-load.pm.log
00:17:58.253 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728955150_collect-vmstat.pm.log
00:17:59.192 01:19:11 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:17:59.192 01:19:11 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:17:59.192 01:19:11 -- spdk/autopackage.sh@14 -- $ timing_finish
00:17:59.192 01:19:11 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:17:59.192 01:19:11 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:17:59.192 01:19:11 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:17:59.452 01:19:11 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:17:59.452 01:19:11 -- pm/common@29 -- $ signal_monitor_resources TERM
00:17:59.452 01:19:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:17:59.452 01:19:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:17:59.452 01:19:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:17:59.452 01:19:11 -- pm/common@44 -- $ pid=102159
00:17:59.452 01:19:11 -- pm/common@50 -- $ kill -TERM 102159
00:17:59.452 01:19:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:17:59.452 01:19:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:17:59.452 01:19:11 -- pm/common@44 -- $ pid=102161
00:17:59.452 01:19:11 -- pm/common@50 -- $ kill -TERM 102161
00:17:59.452 + [[ -n 6164 ]]
00:17:59.452 + sudo kill 6164
00:17:59.461 [Pipeline] }
00:17:59.477 [Pipeline] // timeout
00:17:59.482 [Pipeline] }
00:17:59.495 [Pipeline] // stage
00:17:59.499 [Pipeline] }
00:17:59.512 [Pipeline] // catchError
00:17:59.521 [Pipeline] stage
00:17:59.523 [Pipeline] { (Stop VM)
00:17:59.534 [Pipeline] sh
00:17:59.823 + vagrant halt
00:18:02.361 ==> default: Halting domain...
00:18:08.949 [Pipeline] sh
00:18:09.232 + vagrant destroy -f
00:18:11.773 ==> default: Removing domain...
00:18:11.786 [Pipeline] sh
00:18:12.071 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:18:12.080 [Pipeline] }
00:18:12.094 [Pipeline] // stage
00:18:12.099 [Pipeline] }
00:18:12.113 [Pipeline] // dir
00:18:12.118 [Pipeline] }
00:18:12.132 [Pipeline] // wrap
00:18:12.137 [Pipeline] }
00:18:12.149 [Pipeline] // catchError
00:18:12.158 [Pipeline] stage
00:18:12.160 [Pipeline] { (Epilogue)
00:18:12.172 [Pipeline] sh
00:18:12.457 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:18:16.698 [Pipeline] catchError
00:18:16.700 [Pipeline] {
00:18:16.713 [Pipeline] sh
00:18:16.999 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:16.999 Artifacts sizes are good
00:18:17.009 [Pipeline] }
00:18:17.023 [Pipeline] // catchError
00:18:17.035 [Pipeline] archiveArtifacts
00:18:17.043 Archiving artifacts
00:18:17.159 [Pipeline] cleanWs
00:18:17.172 [WS-CLEANUP] Deleting project workspace...
00:18:17.172 [WS-CLEANUP] Deferred wipeout is used...
00:18:17.179 [WS-CLEANUP] done
00:18:17.181 [Pipeline] }
00:18:17.198 [Pipeline] // stage
00:18:17.203 [Pipeline] }
00:18:17.217 [Pipeline] // node
00:18:17.223 [Pipeline] End of Pipeline
00:18:17.280 Finished: SUCCESS
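Editor's note: the xtrace near the top of this log records `common.sh` checking whether the installed lcov is older than 2.0 (`lt 1.15 2` via `cmp_versions 1.15 '<' 2`) before choosing `--rc lcov_branch_coverage` style options. The sketch below is a hypothetical standalone re-creation of that comparison for readers following the trace, not SPDK's actual source: it reuses the traced names `lt` and `cmp_versions` and the `IFS=.-:` split, but only implements the `<` / `>` / `==` cases.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the version comparison traced in the log above.
# Versions are split on '.', '-' and ':' and compared component-wise,
# with missing components treated as 0 (so "1.15" vs "2" compares as
# "1.15" vs "2.0").
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v len a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        # First differing component decides the comparison.
        if (( a > b )); then [[ $op == '>' ]]; return; fi
        if (( a < b )); then [[ $op == '<' ]]; return; fi
    done
    # All components equal.
    [[ $op == '==' ]]
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Under this sketch, `lt 1.15 2` succeeds (1 < 2 decides at the first component), which matches the `return 0` seen in the trace and explains why the job exports the pre-2.0 `--rc lcov_branch_coverage=1` flag spelling.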